Shifting Security Left
In this era of constant threats and attacks on our computing systems, security requirements have become ever more ubiquitous and urgent. In reality, software and computing systems have always needed security. Fortunately, considering security as an integral part of a project from its outset enables you to think about security concerns when they have the least cost and greatest impact, and can help you design a better solution rather than just adding to your design and implementation burden. A security-first approach focuses on continuous monitoring and management of cloud risks and threats, utilizing modern automation tools and techniques to ensure the organization discovers and tracks threats in real time. This play will discuss concepts to keep in mind from the outset of your software project and provide guidance and resources to facilitate your Serverless solution's path from concept to obtaining its Authority to Operate (ATO).
Intent Behind Shifting Left
The incorporation of security concerns at the start of a project and including security requirements into the design process for systems from the outset does a number of things:
- Lowers the cost of including required security solutions.
- Ensures that processes and the solutions that support them will operate securely.
- Reduces long-term maintenance by lowering the chance that security concerns go unaddressed.
- Provides more time to ensure solutions support security concerns.
- Encourages the early development of the System Security Plan (SSP).
All of these items normalize the act of thinking about security in every step of the system development process. The sooner we work security into our designs, the cheaper security becomes, because we no longer have to make changes to designs or systems at later stages, where adding security could cause rework.
The earlier we introduce security, the lower its overall cost, the less rework is required, and the more secure our systems become.
Spelled out this way, including security in the development process from Day One becomes a no-brainer. The only question that remains is how.
Security and Compliance
In order for a new software system to reach its users, it must obtain its Authority to Operate (ATO). In order to do this, the project owner and their team need to demonstrate:
- How the software system implements its functionality in a safe and secure manner
- How access to its constituent components (code, infrastructure, configuration, etc.) is managed and controlled to ensure the integrity and safety of the system
- How the system's components are deployed in a safe, secure, and repeatable manner
In broad terms, it is the responsibility of a project's Information System Security Officer (ISSO) to ensure that the processes implemented across the project demonstrably address these concerns, as documented in the project's SSP and other security documentation. This can be a large task, however, as a project's security posture is interwoven into the fabric of every component, networking decision, and process involved in creating or hosting the product. An ISSO can get much of the support they need from their development team; however, the questions they must ultimately answer come from their organization's Information System Security Manager (ISSM), supported by Security Engineering (SE) and Security Operations (SecOps).
Engaging with Security Engineering
Project teams should engage with Security Engineering as early in the process as possible. Later sections within this play will discuss things to consider in preparation for engagement, but once you are ready, reach out to Security Engineering and ask for an initial Security Engineering DevSecOps Engagement Discussion.
The email address for Security Engineering is ociso-devsecops@gsa.gov.
One of the subjects that you will want to discuss is whether or not your project is a candidate to have a security engineer embedded into your team. Having a Security Engineer on your team can be very beneficial, however it is not the only way in which Security Engineering can engage with a project to facilitate its security success.
Regular meetings between your ISSO and ISSM that involve the project's technical team leads are critical to ensure that your project is on schedule to obtain its ATO in time to meet critical release milestones. When these meetings begin, and what is expected of a team during them, can also be covered during the initial SE Engagement discussion.
The ATO process can be a long one, even in the rare case that no issues are encountered. It is therefore critical that your team begin developing their SSP as early as possible, with as much feedback from SE and your ISSM as possible. Involve your ISSM in architectural design discussions, so that the system can be represented clearly and accurately within the SSP and other supporting documents.
Areas to Discuss with Security
Prospective FALCON Serverless Compute Model tenants should prepare before engaging with Security Engineering and FCS. The following are topics that project managers should ensure their team has discussed in advance in order to jump-start discussions with Security Engineering and CISS Cloud Enablement.
Infrastructure Security
Tenants within a serverless environment inherit facility and physical security from the Cloud Provider (in this case, AWS), but that is not where your responsibility ends. You still need to consider how the infrastructure in your account is configured, and how the other security layers interact with your infrastructure and affect its security.
Serverless technologies are managed services that abstract the hardware away from the software that runs on it. The more managed a solution, the more responsibility the Cloud Provider takes for the hardware (the disks, memory, and processors that make up the physical servers, hardware switches, and routers) and the foundational software (operating systems, application servers, container management systems, software routers and load balancers, etc.) your system is built upon. Using managed solutions allows you to inherit the security implementation and controls for your infrastructure, allowing you to focus more on the business value of your system. Even though most of the infrastructure security concerns will be handled by the Cloud Provider, you're not off the hook -- your decisions regarding other security and application layers can impact the security of your infrastructure. It is therefore critically important that you understand how your infrastructure is to be defined and secured, so that you can design appropriately and work with your ISSO and Security Engineering to ensure that your system's infrastructure remains secure.
Network Security
Your environments and accounts are connected to the GSA Network and the wider world through the networking defined for your resources. Some of those resources will already be defined for you, as they will have been configured when FCS provided the Tenant Super Account. You will still have the flexibility to configure the networking infrastructure you need to support your environments. In doing so, you will need to work with your ISSO and Security Engineering to ensure that your solutions are able to support your needs without compromising your security. Numerous resources and pre-built components will be available to make this task easier for you.
Identity and Access Control
There are several places within your project where identity and access control concerns can be introduced; chief among them are the parallel paths of authentication and authorization for your accounts and for your applications. The FALCON Serverless Compute Model provides a ready-to-install solution for providing developer access to the AWS Console along with an access control management solution for defining the roles and permissions available. There are several premade components that facilitate the implementation of industry-standard authentication processes.
Application Security
Your application needs to be secure, and securely accessed. The nature of your application will determine the security boundaries your business workflow implementation needs to cross. There are a myriad of scenarios as to what boundaries need to be crossed using what technologies, and just as many best practices as to how to implement those crossings. This will be a critical area for discussion with your ISSO and with your Security Engineering partners. Be prepared for these conversations, with a clear understanding of your application's boundaries and how you are planning to navigate them.
Data Security
Your data is the lifeblood of your application; you need to secure it while simultaneously retaining access to it. Moreover, you may have PII or PCI data within your data sets that require specialized care and handling. There are a number of concerns to consider:
- During the course of development, developers will need to access the data structures in lower environments; how can they do this in a secure and controlled manner?
- Once in Production, how can production data be safely accessed and corrected in the case of an error or incident?
- How can Production data be sanitized of sensitive data and moved into lower environments?
- How can data be secured in transit and at rest?
- How can PII or PCI data be secured in transit and at rest, while still remaining accessible by the application?
There are a number of methods for securing data at rest and in transit. Be aware of the data security requirements you will need to support your data, and consider your plans for securing your data prior to discussing data security with your embedded Security Engineering teammate.
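As a sketch of the field-level approach implied by the questions above, the snippet below encrypts only the fields identified as PII, leaving the rest of each record readable. This is a minimal illustration with hypothetical field names and a placeholder codec -- base64 is encoding, not encryption, and a real deployment would call a managed key service (e.g., AWS KMS Encrypt/Decrypt) instead:

```python
"""Sketch of field-level protection for PII, assuming a pluggable cipher."""
import base64
from typing import Callable

PII_FIELDS = {"address", "travel_itinerary"}  # hypothetical field names

def protect_record(record: dict, encrypt: Callable[[str], str]) -> dict:
    """Encrypt only the PII fields, leaving the rest readable."""
    return {k: encrypt(v) if k in PII_FIELDS else v for k, v in record.items()}

def unprotect_record(record: dict, decrypt: Callable[[str], str]) -> dict:
    """Decrypt only the PII fields, for use inside the application."""
    return {k: decrypt(v) if k in PII_FIELDS else v for k, v in record.items()}

# Placeholder codec -- NOT encryption; swap in real KMS calls in production.
def demo_encrypt(value: str) -> str:
    return base64.b64encode(value.encode()).decode()

def demo_decrypt(value: str) -> str:
    return base64.b64decode(value.encode()).decode()

record = {"employee": "J. Smith", "address": "123 Main St", "amount": "42.50"}
stored = protect_record(record, demo_encrypt)
assert stored["address"] != record["address"]   # opaque in the data store
assert stored["amount"] == record["amount"]     # non-PII stays readable
assert unprotect_record(stored, demo_decrypt) == record
```

The design point is that only the fields Security identifies as sensitive become opaque, so data-management tools can still work with the remainder of the record.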
Logging and Monitoring
Logging and monitoring your application are not just useful for debugging and supporting operations; they are a security requirement. Be aware of the logging and monitoring requirements needed to support your application's SSP. These will be based upon the types of data you have, your application's FISMA level, and the kinds of communications and boundaries with other systems you will have in order to support your application's business.
Be aware that both logging (tracking execution events and errors) and monitoring (automated observation of the system's health, communication, and access in real time) are implemented in fundamentally different ways within a Serverless environment than in the On-Premise systems you may be used to. This is because many of the tools traditionally used to log and monitor software systems are not usable within managed services (e.g., on Serverless platforms). This is okay, as your Cloud Provider (AWS) provides other tools and methods for logging and monitoring its Serverless platform offerings. Make yourself aware of these tools, and of their differences from and similarities to the tools you may be more familiar with. Discuss these tools with your embedded Security Engineering teammates, and understand how SecOps will integrate with them, as well as how your developers and DevOps engineers can use them.
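As one illustration of serverless-style logging: a Lambda handler can simply emit structured JSON lines, since anything written to standard output or standard error on AWS Lambda is captured by CloudWatch Logs automatically, and JSON entries are then queryable with CloudWatch Logs Insights. The handler and field names below are hypothetical:

```python
"""Sketch: structured (JSON) logging from a Lambda-style handler."""
import json
import logging

logger = logging.getLogger("app")
stream_handler = logging.StreamHandler()
stream_handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(stream_handler)
logger.setLevel(logging.INFO)

def log_event(level: str, message: str, **fields) -> str:
    """Emit one JSON log line; returns it for inspection/testing."""
    line = json.dumps({"level": level, "message": message, **fields})
    logger.info(line)
    return line

def lambda_handler(event, context=None):
    """Illustrative handler: logs receipt of a file-upload event."""
    line = log_event("INFO", "file received",
                     file_name=event.get("file_name"),
                     request_id=getattr(context, "aws_request_id", "local"))
    return {"statusCode": 200, "logged": line}

# Local invocation with a sample event (no AWS context object).
result = lambda_handler({"file_name": "receipts-2024-03.csv"})
parsed = json.loads(result["logged"])
assert parsed["file_name"] == "receipts-2024-03.csv"
```

Because each log line is machine-parseable, both SecOps tooling and developers can filter on the same fields, rather than scraping free-form text.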
The FALCON Serverless Compute Model and Security
The FALCON Serverless Compute Model supports your application's security goals through the use of previously reviewed infrastructure components and via pre-built solutions for targeted problem spaces. It provides solutions, components, and utilities that have been developed in conjunction with Security Engineering; while this does not guarantee an automatic ATO for your application, it does help to streamline the path forward. It does this by:
- Providing pre-built components and solutions to use, reducing development time.
- Providing components, foundations, and solutions that already integrate with SecOps tools and follow required practices; we implemented it for you so you don't have to.
- Providing solutions and usage patterns that have already been used by other systems.
- Providing implementations that Security Engineering and SecOps are already familiar with, increasing the likelihood that your use of them will be seen favorably.
- Increasing confidence in solutions, as they are already in use by other teams.
The FALCON Serverless Compute Model is not a platform or a singular solution; instead, it is a collection of solutions and services, along with guidance and documentation, designed to work together to allow tenants to build, manage, and maintain the Serverless application systems they need to support their business.
Developer Access Solution
This solution provides developers access to the AWS console for tenant accounts through an authentication mechanism federated with the GSA ENT Active Directory (AD) via SecureAuth integration. Once authenticated, developers can assume roles defined to limit their access to only those controls needed to perform necessary duties, such as debugging and log viewing.
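As a sketch of what such a limited role might permit, the following hypothetical IAM policy grants only the actions needed to view Lambda logs. The actual roles and permissions are defined through the access control management solution and should be confirmed with Security Engineering:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowLogViewingOnly",
      "Effect": "Allow",
      "Action": [
        "logs:GetLogEvents",
        "logs:FilterLogEvents",
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams"
      ],
      "Resource": "arn:aws:logs:*:*:log-group:/aws/lambda/*"
    }
  ]
}
```

A role scoped this way lets developers debug via logs without granting any ability to change infrastructure, consistent with the principle of least privilege.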
Tooling That Supports Serverless Application Development
There are many tools that developers need to do their work, chief among them being a good IDE that supports the technologies they are using, debugging tools, and data access capabilities. There are a number of options that are possible within the AWS Cloud ecosystem, but tools also need to be compatible with GSA security practices. The FALCON Serverless Compute Model provides a non-prescriptive toolset, along with techniques and environmental configurations suitable for developing Serverless applications within AWS while remaining fully compliant with GSA security standards and policies.
Flexible CI/CD Pipeline Implementation
The name of the game in both Cloud development and Security is consistency. FALCON Serverless Compute Model provides a templated CI/CD implementation that aims to give tenants the flexibility they need, while minimizing or eliminating the work individual teams need to do in order to be compliant with security requirements.
Software Package Repository
Development teams use third-party libraries and packages to provide common capabilities so that the team can focus on the business logic at hand, rather than recreating the same software components over and over. Often, these packages are stored alongside the software the team is writing, requiring that, across the Enterprise, multiple teams maintain copies of the same packages; each team is then responsible for tracking the end-of-life of the package versions it uses. Each team is also independently responsible for ensuring that each package it integrates into its products is compliant and approved for use. The FALCON Serverless Compute Model provides developers with a centralized repository from which they can obtain approved versions of third-party packages and libraries that have already been scanned and are guaranteed to be safe for use within GSA software solutions.
Reusable Infrastructure-as-Code
Defining a system's Infrastructure-as-Code (IaC) provides the ability to deploy that system repeatedly and reliably, as often as necessary, in all environments. The FALCON Serverless Compute Model supports and promotes IaC by providing reusable component definitions and architectural patterns defined using the AWS Cloud Development Kit (CDK). AWS CDK allows developers and architects to define their application architecture in the same languages, and with the same extensibility, reusability, and encapsulation techniques, that they already use when writing application code; it eliminates the need to learn an additional language, such as the CloudFormation template language or Terraform.
The FALCON Serverless Compute Model provides pre-built infrastructure definitions implementing common features and architectural patterns that can be used to rapidly define tenant application infrastructure. Moreover, these pre-built components have been reviewed by Security Engineering and already implement key security requirements and compliance. Generic foundational classes come with security compliance implementations baked right in; more opinionated architectural implementations are built on top of the foundational classes, in order to provide the benefits of security compliance with ready-to-go architectural definitions. Tenants can build using high-level architectural patterns, or use the underlying foundational classes to build what they need.
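As an illustration of the layering described above, a tenant stack might extend a foundational class roughly as follows. This is a sketch, not runnable without the AWS CDK toolchain, and the FalconBaseStack name is hypothetical -- consult the Reusable Infrastructure Repository for the actual class names:

```python
# Sketch only: FalconBaseStack is a hypothetical stand-in for a
# FALCON foundational class with GSA tagging/logging compliance baked in.
from aws_cdk import Stack, aws_s3 as s3
from constructs import Construct

class FalconBaseStack(Stack):
    """Stand-in for a foundational class; the real one would apply
    GSA-standard tags, logging, and other compliance settings."""

class ReceiptIngestStack(FalconBaseStack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # The tenant declares only the business resources it needs;
        # compliance defaults come from the foundational class.
        s3.Bucket(self, "UploadBucket",
                  encryption=s3.BucketEncryption.S3_MANAGED,
                  block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
                  enforce_ssl=True)
```

The pattern is ordinary object-oriented inheritance: tenants subclass the reviewed foundation and add their own resources on top, rather than reimplementing compliance settings in every stack.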
Example Path to ATO
Use of the solutions and approaches provided by the FALCON Serverless Compute Model does not automatically ensure that a system will obtain an Authority to Operate (ATO); the components and techniques provided have been developed in conjunction with and reviewed by Security Engineering, but how they are used by a tenant ultimately determines whether or not the system satisfies the required controls. The old adage that the system is more than the sum of its parts holds here. Use of FALCON Serverless Compute Model components can, however, facilitate development and help to expedite review of a system's System Security Plan (SSP -- one of the primary documents used to determine system suitability for ATO).
So how does using FALCON Serverless Compute Model resources and approaches benefit an application development team? The following short scenario illustrates the path an application team could take to reach ATO using FALCON Serverless Compute Model approaches, components, and solutions.
Lauren is tasked with the creation of an application to meet a set of business needs. She works with her development team to determine at a high level what her application will look like and what its boundaries are. The application will have a user interface that allows users within GSA to upload files. The files contain information on travel receipts; they will be processed to determine what funds must be reimbursed to employees. The data also potentially contains sensitive PII, in the form of employee addresses and travel information. A daily report will need to be produced showing the information that has been processed for the day; this information will have employee names associated with reimbursement amounts.
Lauren follows the process laid out for engaging Security Engineering, and a security engineer is assigned to be embedded into her development team. The team works with their ISSO and the security engineer to confirm their security boundaries and review the potential PII. Ed, the security engineer, confirms that the employee addresses and travel information are considered PII, but that the data in the daily report are not. Ed works with Terrence, the team's ISSO, to start the SSP for the project. The development team has determined that the best solution for the project would be to develop the application using Serverless technology, as this will be very cost-effective.
Lauren then engages FCS to procure a Tenant Super Environment. Through discussions with the team and with input from the FCS Cloud Enablement team, they decide on a three-account Tenant Super Environment: one Management account, one Lower Tier account, and one Production account. The development team feels confident that they can satisfy all of their SDLC needs within a single account, lowering overhead costs and maintenance efforts, given the small size of the team. FCS provisions their accounts, which include an initial networking configuration allowing their Lower Tier and Production accounts to be accessed from the GSA Network only.

The development team decides to separate the application into four stacks:
- File ingestion, including an API Gateway used by the UI
- File and Data Processing
- Report Generation and Auditing
- Data Stores, including an S3 Bucket containing the UI as an Angular application
The development team found that, by breaking the application into independently deployable components (or Stacks), each concerned with a specific piece of functionality, they would be able to decouple the development, deployment, and maintenance of each part of the application. They could have decoupled things further, allowing each Lambda Function to be deployed independently, but the Lambdas are tightly coupled with the infrastructure immediately surrounding them (and, in the case of the File and Data Processing and Report Generation Stacks, tightly coupled with other Lambda Functions), so doing so would have increased the complexity of their deployment processes without providing any real additional benefit.
Each stack is deployed independently from one another, and is defined using AWS CDK Infrastructure-as-Code. The Stack definitions were coded by Lauren's application team, but use FALCON Serverless Compute Model Foundational Application Infrastructure classes as their base. Each stack defines a discrete functional area, and has its own repository, containing both the IaC and the Python code written for the Lambda Function handlers.
The team elected to take advantage of the FALCON Serverless Compute Model CI/CD Pipeline implementation, as it provided a straightforward process implementation that was flexible enough to meet their needs. They could have opted to create their own pipeline implementation -- they could even have used some of the foundational components of the FALCON Serverless Compute Model pipeline, if they needed to -- but the FALCON Serverless Compute Model implementation was suitable to their purposes. The pipeline implementation was obtained from the FALCON Serverless Compute Model Reusable Infrastructure Repository by Macey, who has special access within the accounts that make up the project's Tenant Super Environment. She was able to pull the pipeline implementation and configure it in minutes. Macey also installed the FALCON Serverless Compute Model solution for Federated Access for Developers. Both solutions were installed into the project's Management Account, although each has further components that had to also be installed in the Lower Tier and Production accounts. With the Pipeline implementation in place, developers are able to check in their code changes to the repo and have the Pipeline then build and deploy their code. The Federated Access for Developers solution allows Macey to tailor access to exactly what features within the AWS Console the development team needs to do their work, following the best-practice principle of least privilege. She is even able to change this on-the-fly if the team has too much or not enough access.
In order to have the deployed application instances be accessible and communicate with other GSA systems and resources, Macey worked with Ed and Terrence to identify what kinds of infrastructure would need to be in place to allow deployed application instances to communicate with GSA. Terrence then put in the necessary ServiceNow tickets to have the GSA Network firewall and DNS changes made. Macey implemented the necessary infrastructure within a Networking Stack, taking advantage of the pre-built architectural pattern implementations provided by the FALCON Serverless Compute Model Reusable Infrastructure Repository.
In this example, there are several places where FALCON Serverless Compute Model resources and approaches directly support the work of developing the project's SSP.
- Lauren followed the guidelines provided by Security Engineering to have Ed embedded within her team. Because Ed is directly involved with the design and implementation of the product, he is able to speak up when he sees decisions being made that would negatively impact security. The intimate knowledge he gains during development allows him to provide advice that can drive the team towards security best practices from the outset, when this advice provides the greatest positive impact to cost and schedule.
- The development team took advantage of the FALCON Serverless Compute Model Foundational Application Infrastructure classes; these classes come with GSA standard infrastructure tagging and logging practices baked into their foundations. Higher level constructs implement more opinionated best practices for common architectural patterns. These components have already been reviewed by Security Engineering, and their documentation discusses the security controls that the implementation addresses. This information is directly applicable to Terrence's efforts for developing the SSP. This has further benefits because, as new features are added over time, the reusable components used continue to inform the updates that need to be made to the SSP. Remember, the SSP needs to continually evolve as the application evolves.
- Macey used two pre-built FALCON Serverless Compute Model solutions -- one for controlling developer access to the AWS Console, and another to provide a CI/CD Pipeline. Both of these solutions, especially when used together, implement twin best-practice principles: infrastructure is only instantiated from IaC via a repeatable process, and access to resources is granted as narrowly as possible. Moreover, these implementations have already been reviewed and approved for use by Security Engineering. As a result, they directly answer several sections within the SSP; Terrence can reuse content from SSPs written for other projects that used these solutions to address the relevant controls in his project. Ed is able to assist him in obtaining the relevant content.
- The infrastructure Macey implemented to connect the application instances to the GSA Network also used pre-reviewed architectural components that implement common patterns and best practices. The documentation of these components also describes how they can be used to answer SSP controls.
FALCON Serverless Compute Model provides a collection of solutions that tenants are able to connect together to more rapidly construct solutions that are right for them. Brick by brick, these components reduce the time necessary, not only to build the application itself, but to define the documentation necessary to obtain ATO.
Data Migration and Moving to the Cloud
Data is critical to our applications. For projects that will be replacing older systems, integration of the legacy data into the new system can present unique security challenges. Data will need to be protected in transit and at rest, and data fields containing sensitive information (such as PII or PCI data) may require additional encryption so as to not be readable by data management tools.
Oftentimes, data stored within legacy systems was not subject to data security requirements as stringent as today's. The transmission of legacy data into a Cloud-native system may require that sensitive data be segregated and/or encrypted prior to reaching its new destination. In order to programmatically protect this data, it may have to be transmitted to an intermediate holding location prior to its final transformation. Project plans should clearly indicate what kinds of data are to be stored where, and include detailed descriptions of how legacy data will be safely moved into the new system.
Even if a legacy system encrypted sensitive data, complications may arise from the need to decrypt the data with the legacy system's key and re-encrypt it with the Cloud-native system's key. It is worth working out this lock-step hand-off ahead of time, as a detailed understanding of these processes will be needed early on in discussions with Security Engineering.
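The lock-step hand-off can be sketched as a re-encryption step in which plaintext exists only transiently, in memory. The codecs below are standard-library placeholders so the example runs anywhere; a real migration would decrypt with the legacy system's key and re-encrypt with the Cloud-native system's KMS key, ideally inside a controlled intermediate environment:

```python
"""Sketch: decrypt with a legacy codec, re-encrypt with a cloud codec.
Base64/Base85 are placeholders, not encryption."""
import base64

def legacy_decrypt(ciphertext: str) -> str:
    """Placeholder for decryption with the legacy system's key."""
    return base64.b64decode(ciphertext.encode()).decode()

def cloud_encrypt(plaintext: str) -> str:
    """Placeholder for encryption with the cloud system's key."""
    return base64.b85encode(plaintext.encode()).decode()

def cloud_decrypt(ciphertext: str) -> str:
    return base64.b85decode(ciphertext.encode()).decode()

def reencrypt(record: dict, sensitive_fields: set) -> dict:
    """Re-key sensitive fields; plaintext exists only in memory here."""
    out = dict(record)
    for field in sensitive_fields:
        plaintext = legacy_decrypt(record[field])
        out[field] = cloud_encrypt(plaintext)
    return out

legacy_record = {"employee": "J. Smith",
                 "address": base64.b64encode(b"123 Main St").decode()}
migrated = reencrypt(legacy_record, {"address"})
assert cloud_decrypt(migrated["address"]) == "123 Main St"
```

Keeping the re-keying in one small, auditable step makes it easier to describe the hand-off precisely in discussions with Security Engineering.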
For large systems or modernization efforts that will occur over multiple phases, there may be a need to synchronize data between the legacy system(s) and the Cloud-native system(s). There are many techniques possible for implementing data synchronization, but each has its own strengths and weaknesses. It is advisable to minimize the amount of data synchronization needed; this can often be achieved through careful consideration of what components should be migrated or modernized first. Eliminate bidirectional data synchronization if at all possible, and try to limit other forms of data synchronization. Expose data via APIs as much as possible; APIs constructed for data synchronization between legacy and Cloud-native systems can later be refactored and retasked for consumption by other consumer systems. See DMSII to Cloud Continuous Data Replication: Straight Shot to learn more.
In all of these scenarios, there may be times when, for a short period, a less-than-optimal solution is required to transfer stewardship of a dataset from one system to another. How acceptable such a solution is often depends upon the nature of the data, the risks involved, and how long the undesirable situation will persist. These decisions can only be made in partnership with Security Engineering. These situations will naturally arise over the course of an application's development; discussion and careful deliberation are often the best and only courses of action for resolving them. The sooner the complexities in a data transformation effort are known, the better you can identify the risks involved, and the sooner you can work on mitigating them.
Ultimately, Security Engineering is here to help get systems implemented safely. Development and Architecture teams need to come to the table with issues identified, and be prepared to work with Security Engineering to find appropriate solutions.