System Architecture
FALCON Architecture (a bird’s eye view)
FALCON is an event-driven, loosely coupled microservices architecture featuring asynchronous processing. This is implemented through an API for each business function and private subnets. Processing for each business function is triggered by events fired by that function's API. All FALCON APIs are private and inward facing; all traffic to or from FALCON originates on GSA networks. FALCON has no public internet access.

Business function resources (compute, data) are aligned in a vertical under each API. The NoSQL database service DynamoDB is used for almost all data, and MySQL is used where appropriate. AWS Simple Storage Service (S3) is used for non-database at-rest file storage. Permissions, encryption keys, and file access are segregated by business vertical. Events trigger processing and are loosely coupled through AWS EventBridge, AWS Simple Queue Service (SQS), and AWS Simple Notification Service (SNS). The AWS CloudWatch service tracks AWS Cloud events, including the API and application events above.
FALCON is designed, implemented, and operated using DevSecOps; GSA Security staff are directly involved in all stages, daily scrums, deployments, and operations. The FALCON microservice architecture consists of AWS Lambda handlers that provide compute capacity for asynchronous code execution. Each Supply Chain business function uses dedicated API endpoints to broker web service requests, whether from partner applications such as OMS or from the FALCON Portal UI. All API endpoints are secured through a custom header (X-API-KEY) that identifies the calling entity; without this key, endpoint access is not possible. Additionally, all traffic to and from each API endpoint is secured with SSL/TLS.
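As a hedged illustration of this pattern, the snippet below shows what a partner-system call to a FALCON API might look like. The endpoint URL, key value, and payload are hypothetical placeholders; only the X-API-KEY header and the TLS requirement come from the description above.

```python
# Hypothetical sketch of a partner-system call to a FALCON business-function API.
import requests

FALCON_API_URL = "https://falcon-api.example.gsa.internal/v1/orders"  # placeholder URL
API_KEY = "<issued-to-calling-entity>"  # identifies the caller; no key, no access

response = requests.get(
    FALCON_API_URL,
    headers={"X-API-KEY": API_KEY},  # custom header checked at the API
    timeout=30,
)
response.raise_for_status()  # a missing or bad key would surface here (e.g., 403)
print(response.json())
```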
Code review and scanning tools are described below.
FALCON system access occurs through API calls. Each API has a security group and AWS WAF, and is reached through an AWS PrivateLink network endpoint. The PrivateLink endpoint is assigned to a private subnet, requiring API access to flow through a single path to and from the API. A path through the API security group and private subnet never traverses a public network connection, providing layered protection for all API access. Additionally, AWS Virtual Private Gateways (VGW) provide segmentation boundaries between inter- and intra-network traffic.
Lastly, the GSA enterprise employs multiple firewalls and firewall rule sets to limit traffic flow within the enterprise. After traffic passes through GSA firewalls, FALCON scans that traffic and evaluates rules within its own collection of AWS Security Groups (SGs). SGs add another layer of access control for data flow: they allow only specific ports, and an implicit deny applies by default. An explicit allow must therefore be configured before any network traffic can flow within the FALCON VPCaaS environment.
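The sketch below illustrates this explicit-allow model as CDK code (rendered in Python here; FALCON's IaC is written in TypeScript, per the components table below). The VPC ID, CIDR range, and construct names are illustrative assumptions, not FALCON's actual values.

```python
# Minimal CDK sketch of a default-deny security group with one explicit allow.
from aws_cdk import Stack, aws_ec2 as ec2
from constructs import Construct

class ApiSecurityGroupStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # Lookup assumes account/region are set on the stack env; the ID is a placeholder.
        vpc = ec2.Vpc.from_lookup(self, "FalconVpc", vpc_id="vpc-0123456789abcdef0")

        sg = ec2.SecurityGroup(
            self, "ApiEndpointSg",
            vpc=vpc,
            allow_all_outbound=False,  # implicit deny applies to egress as well
            description="Explicit-allow rules for a FALCON API PrivateLink endpoint",
        )
        # The single explicit allow: TLS only, and only from one subnet range.
        sg.add_ingress_rule(
            ec2.Peer.ipv4("10.10.1.0/24"),  # hypothetical Lambda subnet CIDR
            ec2.Port.tcp(443),
            "Allow HTTPS from the Lambda subnet; everything else is implicitly denied",
        )
```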
Read more technical details below about the various components of the FALCON architecture.
AWS Service Name | Brief Description of Use(s) |
---|---|
Lambda | Compute resources for code execution |
DynamoDB | NoSQL Data Store |
Aurora MySQL | Relational Data Store |
SQS | Message Queue service that supports SFTP transactions and file processing |
EventBridge | StepFunction/Event coordinator that supports SFTP Functions and batch processing |
S3 | File store for data archival, Data Exchange Processes (encryption of data), SFTP, Log File repository from CloudWatch/CloudTrail, FALCON UI static file hosting |
Certificate Manager | Certificate Manager for SSL, PKI and other certificate needs |
Key Management Service | Issues one-time-use keys for connectivity to API Gateways and AWS environments through the CLI |
Transfer Family | Supports SFTP file receipt from FALCON application partners |
Step Functions | Stepwise execution of business processes to achieve a functional goal based on specific rules for each business process |
SNS | Messaging service used as part of event notification systems |
Cognito | User Store to map roles for application authorization |
CloudWatch | Logging Service that captures all event logs within FALCON system |
CloudTrail | Auditing Service that allows log review/audit of all FALCON Systems |
Virtual Gateway | Demarcation between “GSA Network” and FALCON VPCaaS environments |
Web Application Firewall | Customizable firewall to secure the FALCON UI/web application, network connection access, and user authorization for data access |
API Gateway | Entry/Exit point to control network access and security to Lambda APIs |
VPC Endpoints | Entry/Exit point to control network access and security to environment services like S3 |
ECS Fargate | Container-as-a-Service offering that hosts containers as required for FALCON environments |
X-Ray | Investigation tool for debugging execution events within Lambda code |
CDK | Development toolkit that supports Infrastructure as Code development, environment configuration, and deployments |
AWS Glue | A serverless data integration service that makes it easy to discover, prepare, and combine data for analytics, machine learning, and application development. |
Athena SQL | Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage. |
Data Migration Service | Supports the Data Plane and migration of data from legacy Mainframe databases to AWS Aurora MySQL or DynamoDB application databases |
Network/Application Load Balancer | Provides load balancing and flow control to FALCON cloud services like the FALCON Portal Web Application |
Route53 | Hosts a private DNS zone to allow for packet routing within the FALCON VPCaaS |
VPC | Virtual Private Cloud is the logical container where all FALCON application entities exist for a specific purpose (Dev/Test/Mgmt/UAT/PROD) |
IAM | Identity Access Management provides roles and policies that secure access to application components and services within FALCON VPCs. |
CloudFormation | Models and creates AWS resources “codified” in AWS CDK. |
Secrets Manager | Used to securely encrypt, store, and retrieve credentials for databases and other services. |
FALCON uses the service groups below inside an AWS VPC in four environments: Dev (not in ATO boundary), Test (not in ATO boundary), UAT, and Prod.
- VPC endpoints are used to keep traffic inside the VPC for all services that support PrivateLink
- Lambda with API Gateway and an NLB is used to host the certificates, making it possible to call the API Gateway endpoints with a custom name in Route53
- Route53 DNS resolvers are used to send GSA internal DNS queries to GSA authoritative DNS servers.
- S3 storage hosts files generated by Lambdas that are sent to peers such as OMS and FSS-19 via SFTP, using a state machine to coordinate handshaking and sending notifications via SNS.
- DynamoDB hosts configuration for the specific SFTP endpoints, such as hostnames, usernames, and the remote locations where SFTP files are stored (see the sketch after this list)
- DynamoDB is also the final destination for data imported from on-prem SQL Server via AWS DMS, as almost all application tables are DynamoDB tables. This is how data synchronization is achieved between on-prem and cloud database tables wherever required.
- SNS topics are used to send emails for specific events such as successes or failures; these topics are subscribed to by whichever users need to be informed about the outcome of individual SFTP operations.
- Access to all resources is managed via IAM, except the following:
  - Data plane Aurora MySQL:
    - Access to the AWS account is controlled by IAM users and roles
    - Access to AWS resources, including the Aurora database instance, is via IAM users and roles
    - Access to Aurora database functions and data is by user/password, with permissions assigned within Aurora MySQL
  - Jenkins via Cognito; access to AWS accounts is via IAM users
- OKTA RBAC is assessed as a more secure access control in which all access to resources is gated by successful login through OKTA, backed by SAML, AD, and ENT. This ensures:
  - MFA controlled by GSA Networks (IAM MFA is direct to AWS and unknown to GSA)
  - Active Directory controls applied by GSA Networks
  - SAML and ENT controls applied by GSA Networks
  - None of these controls is required to log in directly to the AWS Console or to use AWS credentials directly for programmatic access
- IAM Roles are used for AWS access control:
  - IAM Roles are assigned during the OKTA login process, with a fixed expiration or usage timeout
  - OKTA assignment of limited-term IAM roles to the user offers an additional layer of access control, logging, and tracking
- As access is transitioned to OKTA RBAC, IAM users will be decommissioned, leaving only the essential two or three system administrators with IAM-provisioned access control.
- The only non-Lambda compute resources are the following:
  - [non-ephemeral] Jenkins leader running on ECS Fargate; Docker Hub image: jenkins/jenkins
  - [ephemeral] Jenkins workers running on ECS Fargate on demand; Docker Hub image: jenkins/agent
- Access to the NSN Aurora DB is via RDS Proxy, with IAM access via Secrets Manager
- AWS Transfer Family managed SFTP is used to receive files from peers (OMS); access is managed via SSH keys.
- Access to APIs is managed via API keys and IAM with Cognito controls. AWS CodeArtifact in the dev account currently hosts the npm packages built by the CDK constructs repository; all environments use the CodeArtifact npm package repository in the AWS dev account.
- All traffic in and out of the VPC is routed through the VGW pre-provisioned with the AWS VPCaaS accounts by FCS.
- There are no internet gateways in the FALCON AWS accounts.
- Access to database subnets is only allowed from Lambda subnets and specific VDI subnets (FSS19 Admin Desktop)
- Access to API application endpoints is currently only allowed from the VDI environment and is secured by integrating IAM with Cognito for token-based, role-based access to APIs, UIs, and the decryption microservice Lambda.
- Jenkins jobs in dev accounts are triggered automatically from GitHub webhook payloads.
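As referenced in the DynamoDB SFTP-configuration item above, the sketch below shows one plausible shape for a per-partner lookup. The table name and attribute names are assumptions; only the pattern of keeping SFTP endpoint configuration in DynamoDB comes from the list above.

```python
# Hypothetical sketch: a Lambda reads per-partner SFTP settings from DynamoDB.
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("falcon-sftp-endpoints")  # hypothetical table name

def get_sftp_config(partner: str) -> dict:
    """Return hostname/username/remote path for one SFTP partner (e.g. 'OMS')."""
    item = table.get_item(Key={"partner": partner}).get("Item")
    if item is None:
        raise KeyError(f"No SFTP configuration for partner {partner!r}")
    return {
        "hostname": item["hostname"],      # attribute names are assumptions
        "username": item["username"],
        "remote_dir": item["remote_dir"],  # where outbound files are placed
    }
```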

Asset Type | Description of Function or Service Provided |
---|---|
AWS Certificate Manager | Easily provision, manage, and deploy public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services and internal connected resources. |
Cognito | Add user sign-up, sign-in, and access control to web and mobile apps quickly and easily. |
AWS CDK | An open-source software development framework to define the cloud application resources using familiar programming languages. |
AWS Identity and Access Management (IAM) | Enables you to manage access to AWS services and resources securely. Using IAM, you can create and manage AWS users and groups, and use permissions to allow and deny access to AWS resources. |
API Gateway | Provides application program interface capabilities that map incoming network requests such as HTTPS strings to processing functionality such as application processing and compute implemented as Lambda functions. |
DynamoDB | Key-value and document database that delivers single-digit millisecond performance at any scale. It's a fully managed, multi-region, multi-active, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications |
Amazon S3 Bucket | An object storage to store and protect applications data availability, security, and performance. |
S3 data lake | A dedicated, secured S3 bucket used to store large, subject-specific data. |
EventBridge | Event bus used for event handling, including state machines with loops. |
AWS Transfer for SFTP | SFTP will be performed to receive and send files using S3 buckets with restrictive policies. AWS Transfer is the preferred mechanism for SFTP. |
Lambda | Serverless compute service that runs application and backend code. |
App ECS on Fargate | AWS Fargate is a serverless compute engine for containers that works with both ECS and EKS. Fargate removes the need to provision and manage servers; it runs each task or pod in its own kernel, giving tasks and pods their own isolated compute environment. This enables workload isolation and improved security by design. |
Application Load Balancer | The load balancer distributes incoming application traffic across multiple targets, such as EC2 instances, containers, and IP addresses in multiple Availability Zones. |
Aurora Serverless MySQL | Provides relational database services in the cloud without managing capacity, access endpoints, and HA directly. Aurora Serverless MySQL automatically starts up, shuts down, and scales capacity up or down based on the application's needs. |
VPC Endpoints | Endpoints will be used for traffic between AWS Services and Lambda functions as network interface endpoints (AWS PrivateLink) and gateway endpoints. Endpoints will use a standard nomenclature for simplicity. |
Cloudwatch | Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch provides you with data and actionable insights to monitor your applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, providing you with a unified view of AWS resources, applications, and services that run on AWS and on-premises servers. You can use CloudWatch to detect anomalous behavior in your environments, set alarms, visualize logs and metrics side by side, take automated actions, troubleshoot issues, and discover insights. |
CloudTrail | AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting. In addition, you can use CloudTrail to detect unusual activity in your AWS accounts. These capabilities help simplify operational analysis and troubleshooting. |
AWS Route 53 | The global DNS and network routing service internal to the AWS Cloud. This service enables connectivity between on-premises or external interfaces and the AWS Cloud. Route 53 directs network traffic to the VGW and from the VGW to the subnets within the VPC. |
Elastic File System | EFS provides file-based storage and is used only by the CI/CD Pipeline as Jenkins requires some external file storage. |
Step Function | This service enables the linkage of Lambda invocations or service calls so that a series of steps may be executed in an automated, repeatable way. |
Virtual Gateway | This service provides the gateway between the AWS Virtual Private Cloud (VPC) and internal networks and the GSA Network. AWS VPC routing tables specify the VGW for external traffic. |
Key Management Service | Provides the key security module, cryptographic key material storage, and encryption and decryption compute and storage, with related back-end capabilities required to simplify and generalize data encryption at rest. This service is used by other AWS services such as S3, DynamoDB, EFS, Aurora MySQL, API Gateway, Secrets Manager, and Lambda functions. |
Secrets Manager | Provides encrypted storage for confidential information such as database connection strings, application parameters, and other sensitive items. |
Database Migration Service | AWS Database Migration Service (AWS DMS) helps migrate databases to AWS quickly and securely. |
Glue | AWS Glue is a serverless data integration service that makes it easy to discover, prepare, and combine data for analytics, machine learning, and application development. |
WAF | WAF is a web application firewall that helps protect your web applications or APIs against common web exploits and bots that may affect availability, compromise security, or consume excessive resources. |
JFrog | GSA artifact repository. External artifacts, such as programming tools like VS Code extensions, are stored here and scanned by GSA SecOps. |
The following table lists the principal software components (e.g., operating system, database, web software, etc.) for FALCON.
Software Component/Name | Function | Virtual (Yes / No) |
---|---|---|
Jenkins | CI/CD deployment pipelines | Yes (ECS Fargate) |
Python | Encryption Lambdas | Yes |
Node.js | Infrastructure as Code, application code | Yes |
Java | Pricing-support and contract-information applications | Yes |
Angular | Supply chain UI application | Yes |
Prisma | Scanning IaC/CDK code | Yes |
Snyk | Scanning Node.js, Java, and Angular repositories | Yes |
The primary design is a one-way flow of data from the existing mainframe systems through DMS into a cloud-based data plane, and from there into the application data tables. This is the data migration and synchronization process.


The FALCON data flows are:
- Mainframe -> Data Exchange MS SQL Server -> DMS -> AWS Cloud -> S3 bucket for encryption if sensitive -> Second S3 bucket for data loading -> DMS -> Aurora MySQL “data plane” -> Application Tables in DynamoDB.
- SFTP data receipt -> SFTP connection -> File received over SFTP -> File processed in Lambda -> sensitive data encrypted if any exists -> data stored in DynamoDB
- SFTP data sharing -> data pulled from DynamoDB by a Lambda triggered by cron or event (see the sketch below) -> data formatted in Lambda for partner processing, usually flat-file format -> sensitive data decrypted if any exists -> file placed in encrypted S3 bucket -> data transmitted to partner over SFTP
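The sketch below illustrates the cron-triggered leg of the SFTP data-sharing flow as CDK code (rendered in Python; FALCON's CDK code is written in TypeScript). The schedule, asset path, and construct IDs are illustrative assumptions.

```python
# CDK sketch: an EventBridge cron rule triggers the export Lambda nightly.
from aws_cdk import (
    Stack, Duration,
    aws_events as events,
    aws_events_targets as targets,
    aws_lambda as _lambda,
)
from constructs import Construct

class SftpExportStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        export_fn = _lambda.Function(
            self, "SftpExportFn",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="export.handler",
            code=_lambda.Code.from_asset("lambdas/sftp_export"),  # hypothetical path
            timeout=Duration.minutes(5),
        )
        # Nightly: pull from DynamoDB, format for the partner, stage in S3 for SFTP.
        nightly = events.Rule(
            self, "NightlyExport",
            schedule=events.Schedule.cron(minute="0", hour="2"),  # example schedule
        )
        nightly.add_target(targets.LambdaFunction(export_fn))
```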
Sensitive data is decrypted on retrieval from application tables by FALCON application Lambdas, prior to the response to the API that invoked the application Lambda. Decryption is performed by a single microservice Lambda written in Python. The decryption microservice Lambda calls the KMS API via a function in the AWS Encryption SDK included by the Python module: the encrypted text is passed to the function and plain text is returned.
Encryption and decryption occur during Lambda data access and are performed by a microservice Lambda that utilizes the AWS Encryption SDK.
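A minimal sketch of such a decryption handler follows, using the AWS Encryption SDK for Python as described above. The event shape, field names, and ciphertext encoding are assumptions; the SDK calls themselves (EncryptionSDKClient.decrypt backed by a KMS master key provider) follow the SDK's documented usage.

```python
# Sketch of a decryption microservice Lambda using the AWS Encryption SDK.
import aws_encryption_sdk
from aws_encryption_sdk import CommitmentPolicy

_client = aws_encryption_sdk.EncryptionSDKClient(
    commitment_policy=CommitmentPolicy.REQUIRE_ENCRYPT_REQUIRE_DECRYPT
)

def handler(event, context):
    """Decrypt one field value; the CMK ARN is resolved per table/field."""
    key_arn = event["kms_key_arn"]                        # hypothetical: supplied by caller
    ciphertext = bytes.fromhex(event["ciphertext_hex"])   # hypothetical encoding
    provider = aws_encryption_sdk.StrictAwsKmsMasterKeyProvider(key_ids=[key_arn])
    # The SDK calls KMS under the hood; key material never reaches this Lambda.
    plaintext, _header = _client.decrypt(source=ciphertext, key_provider=provider)
    return {"plaintext": plaintext.decode("utf-8")}
```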
Plain text sensitive data exists only in memory, except:
- during initial entry to the AWS Cloud
- during export to FALCON peer systems such as OMS

The following logging is in place:
- a CloudTrail data trail for data events
- a CloudTrail security trail for security events
- a CloudTrail management trail for management events
- S3 bucket logging to log bucket events
- CloudWatch logging for other events

The operational data flow is: AWS CloudTrail and CloudWatch services capture log files.
The Data Flow Diagram (DFD) below maps out the flow of information traveling within an information system and between information systems.
FALCON handles contract and acquisition data. This includes sensitive data that is encrypted on a table-by-table, field-by-field basis before it is loaded into the AWS Cloud MySQL database from which all application tables are loaded. Sensitive data is encrypted immediately upon entry to the cloud and prior to storage in the cloud. The encryption data keys are stored with the field data in encrypted form, while the customer master key (CMK) is stored in the AWS KMS service and accessed by KMS API call with a specific KMS key resource identifier (ARN). Data keys are never stored in plain text; they exist as plain text in memory, within the KMS service call operation, only long enough to encrypt or decrypt the field data. Within a given FALCON table, one CMK and a large number of data keys are in use. Each FALCON table has its own CMK, and a separate CMK is used to encrypt incoming sensitive data from each mainframe table (application encryption, field-level encryption).
Keys are not available to operators, developers, or applications. One microservice Lambda is used to decrypt data and another to encrypt data; the KMS keys used for sensitive data encryption are available only to these Lambdas. The decrypt Lambda assumes a role passed from the application Lambda to authorize use of a specific KMS CMK. The application Lambda receives the role through the API, and the API receives the role from the Role-Based Access Control (RBAC) that obtains it from the Cognito pool containing the requestor. The identity of the KMS CMK used to encrypt sensitive data is known only to these functions. The cross-reference of mainframe tables to ARN(s) is stored, encrypted, in AWS Secrets Manager. Sensitive data is encrypted at the field level, externally and prior to at-rest database storage. Sensitive encrypted data within the application database cannot be decrypted by the database platform, because it uses external KMS CMKs to which the database platform has no access.
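FALCON performs this work through the AWS Encryption SDK; the sketch below instead uses raw boto3/KMS calls to make the envelope-encryption key handling described above explicit. Function and field names are illustrative assumptions.

```python
# Envelope-encryption sketch: a fresh data key per cycle, generated under the
# table's CMK, used once, and stored only in KMS-encrypted form with the field.
import os
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kms = boto3.client("kms")

def encrypt_field(plaintext: bytes, cmk_arn: str) -> dict:
    dk = kms.generate_data_key(KeyId=cmk_arn, KeySpec="AES_256")
    nonce = os.urandom(12)
    ciphertext = AESGCM(dk["Plaintext"]).encrypt(nonce, plaintext, None)
    # The plaintext data key is discarded here; only the KMS-encrypted copy
    # is stored alongside the field data, as described above.
    return {"ciphertext": ciphertext, "nonce": nonce,
            "encrypted_data_key": dk["CiphertextBlob"]}

def decrypt_field(record: dict) -> bytes:
    dk = kms.decrypt(CiphertextBlob=record["encrypted_data_key"])  # CMK never leaves KMS
    return AESGCM(dk["Plaintext"]).decrypt(record["nonce"], record["ciphertext"], None)
```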

There are no mechanisms for anyone including Amazon service operators on-site within AWS facilities to export or view the key material in plaintext.
One Aurora Serverless MySQL database cluster is used for data migration and synchronization; this is the cluster into which DMS loads data.
A separate Aurora Serverless MySQL Database Cluster is used for one business line function's data that was not well served by DynamoDB. This is an application database used by NSN-Class-Group.
All other application data is stored in DynamoDB. This is the majority of data.
Aurora MySQL and DynamoDB are each encrypted with KMS keys at the database level.
Operational data, specifically log files and Lambda source code in zip format, is stored in encrypted S3 buckets. FALCON S3 buckets are SSE-encrypted with KMS keys accessible only to the S3 managed service. FALCON S3 buckets are not publicly accessible and are reached through PrivateLink endpoints that require TLS connections. Data in FALCON S3 buckets is encrypted at rest within the bucket and in transit to and from the bucket.
Sensitive data encryption is enforced at the field level by the encryption Lambda microservice, triggered by the creation of an object in the first S3 bucket by DMS. This point is prior to storage in the AWS Aurora Serverless MySQL database cluster, or “data plane”. AES-256 encryption using the Python AWS Encryption SDK and the AWS KMS API is executed by the encryption Lambda microservice. KMS key identifiers are passed to the KMS API during the encryption request, and the encryption keys are never available to the Lambda microservice in plain text. Key identifiers are associated with the table name and field name of the data to be encrypted; currently, the same CMK is used for all sensitive fields within a table. The design will support encryption using different CMKs on a field-by-field basis if required by GSA and if the processing time becomes acceptable from a business perspective. Different data keys are used during different encryption cycles, with the end result that hundreds or thousands of data keys exist in a form encrypted by the one CMK for a given sensitive data field in a table.

S3 buckets are also used for aggregation and querying of large groups of data, such as CloudTrail logs queried by AWS Athena SQL or S3 Select; AWS calls this type of use an “S3 data lake”. These buckets are encrypted with KMS keys, and access is restricted by IAM. The key materials are stored in AWS KMS cloud hardware security modules (CloudHSM) and are not directly accessible.
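A hedged sketch of such a data-lake query follows. The database, table, and result-bucket names are hypothetical; start_query_execution is the standard boto3 Athena call.

```python
# Sketch: Athena runs standard SQL over CloudTrail logs stored in S3.
import boto3

athena = boto3.client("athena")

response = athena.start_query_execution(
    QueryString="""
        SELECT eventtime, eventname, useridentity.arn
        FROM cloudtrail_logs                -- hypothetical table name
        WHERE eventsource = 'kms.amazonaws.com'
        LIMIT 50
    """,
    QueryExecutionContext={"Database": "falcon_audit"},  # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://falcon-athena-results/"},  # hypothetical
)
print(response["QueryExecutionId"])  # poll get_query_execution for completion
```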
No bastion hosts are used. No external access is provided except through business-function-line API requests. API requests are accepted from user interfaces running on client machines connected to the GSA network, or from peer systems. User access to FALCON user interfaces requires MFA-based access to the GSA network. Peer-system access to FALCON functionality requires a certificate and header key.
For security reasons, FALCON also uses a virus scanning process that scans all incoming files that enter S3 and all outgoing files that are sent from FALCON to other peer groups and systems.
DevOps/DevSecOps Management
The following practices were implemented to ensure that both DevOps and DevSecOps needs were met as part of the development of the FALCON architecture.
FALCON uses SAFe Agile techniques featuring Program Increments (PIs) that contain Sprints, during which development work is performed. During each Sprint, developers work on application code as serverless Lambda functions, designed as stand-alone microservice code modules. As work proceeds, developers save their changes into a development branch on the main business function line in GitHub. When a microservice code module is ready for unit testing, it is merged into the main branch, which triggers deployment through the CI/CD pipeline (Jenkins) to the Dev VPC environment.
Each AWS VPC provided by FCS as VPCaaS has a separate CI/CD pipeline drawing from the single GSA-provided GitHub code repository. Different code sets may be deployed to different environments.
All FALCON application access is through an API deployed in the AWS API Gateway managed service. API Gateway endpoints front the Lambda application microservice code module(s). Deployment of most Lambda modules is tightly integrated with API configuration specifics, including Lambda versions and aliases. The workflow steps are:
- A single Lambda code module, or multiple modules may be deployed through CI/CD for unit testing
- Lambda versions and aliases are used to separate development and final (or blue and green) versions as two different stages on an API Gateway.
- At the end of each Sprint the Sprint Release is deployed through the CI/CD pipeline for demonstration to Product Owners (PO).
- All the Lambda code modules in the business line repository specific to the Sprint are deployed for PO demonstration.
- At the end of each PI the Agile Release Train (ART) Release is offered for UAT.
- The end of a PI, about once every three months, is a natural point for delivery of changes as a release to the UAT environment.
- All the Lambda code modules in the business line repository specific to the PI are deployed for UAT.
- The delivery of each Set or Group of business line functions occurs as a combined deployment.
- Production deployments occur as directed by GSA, usually upon completion of a group of PIs.
- Each GitHub repository used in the Set or Group of business line functions is deployed for testing.
- Separate GitHub repositories are used for each of the twelve business functions.
- As required, sub-repos can be added to main business function repos.
- The current design is intended to limit this structure and keep the complexity of the repo architecture minimal.
- Long-running branches with a fixed duration are used; the duration is fixed to the duration of the PI.
- One master branch is used. This is deployed to each environment, with tagging controlling the exact commit deployed to each VPC.
- Developers may create a branch from master and perform work in that branch until it is ready for deployment, at which time the developer merges the branch back into master.
The AWS CDK (AWS CDK Site) is used to automate the creation of, and changes to, the AWS accounts/environments (dev, test, uat, prod). CDK code is written in TypeScript and committed to enterprise GitHub; a GitHub webhook triggers the Jenkins job that automatically deploys infrastructure changes to the development environment. For the other environments, admins/users must go to Jenkins for that environment [jenkins.fss19-${SHORT_ENV}.fcs.gsa.gov/] and manually trigger the deployment of the infrastructure that was changed in code. The environments are identical by design and are created and maintained by one set of IaC code. IaC is subject to threat scanning, meaning that FALCON infrastructure is scanned for threats prior to being deployed or instantiated.
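The sketch below shows the one-IaC-for-all-environments pattern as a CDK app (rendered in Python; FALCON's CDK code is TypeScript). The FalconStack module and env_name parameter are hypothetical stand-ins, and the environment suffixes are modeled on the SHORT_ENV pattern in the Jenkins URL above.

```python
# CDK app sketch: one stack class, instantiated once per environment, so all
# environments are created and maintained by the same IaC code.
import aws_cdk as cdk

from falcon_infra import FalconStack  # hypothetical stack module

app = cdk.App()
for short_env in ["dev", "test", "uat", "prod"]:
    # Identical infrastructure per environment; only the name/config differs.
    FalconStack(app, f"falcon-{short_env}", env_name=short_env)
app.synth()
```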
Jenkins is implemented as an ECS Fargate task deployed with EFS as storage, so that if/when Jenkins is restarted, redeployed, or reconfigured, the job history is not impacted. Jenkins depends on a Jenkinsfile deployed in each code repo to specify the build steps. The Jenkinsfile includes, but is not limited to, deploying required tooling such as Node.js and the AWS CLI, builds such as npm and Maven builds, and deployment steps such as using the AWS CLI to deploy artifacts to S3. Jenkins relies on GitHub Enterprise webhooks to trigger deployments automatically, and only to the dev environment, based on pushes to the master branch; other environments must be deployed manually via the Jenkins UI.

The FALCON CI/CD pipeline and developer platform use open-source tools to scan code for vulnerabilities, errors, and bugs. Snyk, a platform for securing code and dependencies, is integrated into Jenkins. Snyk scans Node.js, Java, and Angular repositories to ensure compliance with known Information Assurance Vulnerability Alerts (IAVAs) and secure coding best practices.
FALCON uses Prisma Cloud to scan the VPCaaS environment for known security vulnerabilities. These scans happen monthly, and at the completion of each scanning cycle a vulnerability report is compiled and delivered to FALCON system owners.
Image scanning is not applicable to FALCON because FALCON creates and maintains no images.
AWS Secrets Manager and AWS Key Management Service are used for secret and key management. Access to secrets or keys is controlled through roles and service-specific IDs (ARNs). For example, each KMS CMK has its own unique key policy; this policy specifies the principals allowed to use the key, such as AWS users and IAM roles, as well as the actions allowed. All traffic is secured via HTTPS/TLS, SFTP, or SSH. Key and secret details are not stored in any database; they are provided at runtime as an environment variable and then forgotten. In Jenkins, credentials and secrets are handled by Jenkins secure environment variables.
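A minimal sketch of the runtime-retrieval pattern described above follows. The secret name is hypothetical; get_secret_value is the standard boto3 Secrets Manager call.

```python
# Sketch: fetch a credential from Secrets Manager at runtime; nothing is
# persisted to a database, per the description above.
import boto3

secrets = boto3.client("secretsmanager")

def get_db_connection_string() -> str:
    resp = secrets.get_secret_value(SecretId="falcon/dataplane/aurora")  # hypothetical name
    return resp["SecretString"]  # held in memory for the request, then discarded
```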
There are two types of artifacts:
- the API Lambda handlers' artifacts that are stored on S3 (and protected by IAM access to S3), and
- Jenkins pipeline artifacts that are created as part of Jenkins pipeline (job logs).
Code developed as part of the overall process is maintained within long-running branches correlating to the current Program Increment (PI). When changes are pushed to the “Main” branch within the DEV VPCaaS environment/pipeline, code is compiled and deployed automatically. Automatic deployment of FALCON code occurs only in the DEV VPCaaS environment.
When a dev cycle or PI completes, deployment packages are labeled as a specific release and represented by a hash value generated by GitHub. Deployment of code packages to higher environments (Test, UAT, PROD) is a manual process initiated by a human. Deployment events use the label-and-hash combination to ensure the correct package (and all its components) is deployed to the target environment.
Deployment to the TEST VPCaaS environment requires administrative approval and action by a system administrator. Deployment to the staging environment, the UAT VPCaaS, requires administrative and GSA staff approval. Deployment to the Production VPCaaS environment requires GSA IT Security approval, GSA staff approval, administrative approval, and action by a system administrator. Production deployments are tracked in Jira.
Serverless Design
The following sections cover the technologies within the architecture that enabled FALCON to have a serverless design.
Access to AWS accounts is managed via IAM groups; the IAM groups, policies, and matching roles are all currently created via CDK code. IAM roles are assumed after a user logs in via OKTA federated identity backed by the SecureAuth IdP, which enforces the MFA requirement and allows access only via LDAP/AD users. Roles assumed through OKTA are time-limited.
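The sketch below shows the STS call that underlies this kind of time-limited, SAML-federated role assumption. The role and provider ARNs are placeholders, and in practice the SAML assertion is produced by the OKTA/SecureAuth login flow rather than supplied by hand.

```python
# Sketch: SAML-federated, time-limited role assumption via STS.
import boto3

sts = boto3.client("sts")

creds = sts.assume_role_with_saml(
    RoleArn="arn:aws:iam::123456789012:role/FalconDeveloper",     # placeholder
    PrincipalArn="arn:aws:iam::123456789012:saml-provider/Okta",  # placeholder
    SAMLAssertion="<base64-encoded assertion from the OKTA login flow>",
    DurationSeconds=3600,  # fixed expiration: credentials lapse after one hour
)["Credentials"]
# creds holds a temporary AccessKeyId/SecretAccessKey/SessionToken set.
```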
Workloads are separated by IAM permissions and access, subnets, security groups, and IP routing tables. In general, IP routing tables are unique by subnet. Security groups are unique by Lambda function, endpoints, API, or subnet. Subnets are unique within each VPC as a Service. IAM roles and permissions are unique by service and resource name within service (ARN) such as a KMS CMK arn.

There are four VPCaaS environments in FALCON: Dev, Test, Mgmt (UAT), and Production. These do not overlap or share subnets, security groups, IAM roles or permissions, ARNs, or routing tables. Each VPCaaS offers at least three Availability Zones (AZs); each AZ is a separate data center within AWS's cloud infrastructure. The same subnet exists in a minimum of two, and usually three, different AZs to provide high availability. In the event of a failure at any one AWS data center, the redundant subnet is used instantaneously, without switch-over, routing changes, or administrative action.
IAM permissions are checked prior to the execution of an action. IAM permissions are the primary separator of workloads. IAM permissions apply to both user actions and system actions.
Each VPC uses a unique IPv4 CIDR block of addresses, and each subnet within a VPC uses a unique, contiguous IPv4 CIDR block within the VPC address range. Security groups allow access to subnets by source IP address or range. IAM permissions can use IP addresses but seldom do.
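As an illustration of this scheme using only the Python standard library, the snippet below carves contiguous, non-overlapping subnets from a VPC block. The /16 and /24 sizes are example values, not FALCON's actual allocations.

```python
# Illustration: unique VPC CIDR block carved into contiguous subnet blocks.
import ipaddress

vpc_cidr = ipaddress.ip_network("10.10.0.0/16")   # one unique block per VPC (example)
subnets = list(vpc_cidr.subnets(new_prefix=24))   # contiguous, non-overlapping /24s

print(subnets[0])  # 10.10.0.0/24 -> e.g., an APIs/Lambdas subnet
print(subnets[1])  # 10.10.1.0/24 -> e.g., a MySQL subnet
```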
Specific technical functional groups are isolated by subnet; there is a subnet devoted to APIs and Lambdas, for example. Subnets also exist for MySQL, DNS forwarders, data migration work, CI/CD, and other process verticals. Each subnet is designed to be as small as possible so that subnet IPv4 address range changes in the GSA Network firewall are as small as possible.
Each of the four AWS accounts provisioned for FALCON (Dev, Test, UAT, Prod) has one VPCaaS environment. Within each VPC, there are subnets dedicated to a specific task, such as SFTP, APIs, DNS forwarders, the data plane, data migrations, and others, with the goal of making each small enough that opening pinholes on the GSA Network firewall is easier. More details below:
- Each FALCON environment is a separate AWS account with logical separation for all resources and access accounts.
- Each FALCON environment has a dedicated CI/CD pipeline and infrastructure to maintain seclusion between environments even for infrastructure and application code deployment activities.
- Source code is stored in the GSA Github repository. Each FALCON environment uses a separate, stand-alone connection to the GSA Github repository.
- Infrastructure as code (IaC, CDK) is stored in the GSA Github repository.
- Either application source code or CDK code or both may be promoted and deployed.
- Promotion of code in dedicated branches is the only way "Main" branch items go to PROD, and code is required to be promoted through each branch dedicated to an individual environment.
- Branches are created as needed to track new builds, features, defects, and other changes required for application or infrastructure functionality. All code deployment events are processed through code quality, security, and related scanning. In this way, infrastructure is subject to security scanning before it is built or instantiated.
- Security infrastructure to include API keys, user permission, User Story infrastructure (Secrets and Key Managers), API endpoints (private DNS zones), etc. is logically separated within each environment respectively.
The architecture is designed to leverage the high-availability design inherent in AWS Cloud Services. The serverless architecture contributes to this high availability. Services such as Lambdas, DynamoDB, Aurora MySQL, S3, and many others are implemented by AWS in such a way that they are highly available. AWS SLAs are provided that contractually support high availability to all AWS customers.
Within each VPCaaS, FALCON uses redundant subnets for each technical functional group. The redundant subnet(s) are in a different Availability Zone (a separate physical data center) so that the failure of any one location does not affect FALCON availability.
The network's high availability strategies are:
- Lambdas – HA by design and implemented using at least three private subnets
- DynamoDB – HA included as part of the service
- Aurora MySQL – HA included as part of the service
- Static files use S3 storage – HA included as part of the service
- Privatelink Endpoints - HA included as part of the service
- KMS - HA included as part of the service
- Data Migration Service - HA as part of the service and implemented using at least three subnets
- VGW governs ingress and egress from the environment – HA included as part of the service
- API Gateways - HA included as part of the service
- Virtual Private Gateways - HA included as part of the service
- Infrastructure through CDK/IaC is deployed across 3 AZs within the US East Region (FedRAMP'ed Regions and AZs)
- FALCON Portal sits behind Network/Application Load Balancers - HA included as part of the service
- Using the AWS Web Application Firewall service to further mitigate exploits and DDoS attacks
Private subnets are used in all instances, and all endpoints are private; traffic does not traverse the public internet.