IDE for Serverless

Integrated Development Environments are so intertwined with how software is developed that it is easy to forget all of the complexity that surrounds them. They serve as a kind of shorthand for the overall development process: they are the nexus of development work, connecting to numerous tools, and our gateway into nearly every aspect of that process. We often only realize their importance when they are not working well. For that reason, a good developer experience is essential to ensuring that GSA's developer community successfully engages with cloud-based technologies to build GSA software products.

Development in a Cloud Environment

To understand how development processes may change within the Cloud, it helps to break down how IDEs are used, so that familiar processes can be remapped onto new kinds of tools. Traditionally, software development takes place on a developer's physical machine via Integrated Development Environment software, often supplemented by other tools and products. There are a few main capabilities that a software developer needs their toolset to satisfy:

  1. Code Repository Interaction
  2. Code Editing
  3. Developer's Build
  4. Developer's Run-Time Environment (RTE) Control and Operation
  5. Deployment to the Developer's RTE
  6. Debugging within the Developer's RTE

Classically, an IDE is able to provide all of these capabilities. A developer needs to operate their own local RTE to ensure that the code being authored will work, not just for unit testing, but as a function of iterative development practice. The code to be executed in this manner is typically not checked in prior to execution, as it is not yet ready for sharing with the rest of the team. It therefore requires that the developer have a means to deploy (and potentially to build, as applicable) the code to their local RTE, so that they can test, verify, and debug the code as part of their iterative practice. This is so much the bread-and-butter of development that we often take this process and the tools that come with it for granted.

The Serverless Difference

When operating in the Cloud using any form of managed service, the fundamental difference when compared to the traditional experience is where the local RTE operates. In general with Cloud platforms, and particularly when working with Function-as-a-Service platforms, the code being developed cannot be properly executed outside of the Cloud. Yes, often it can physically be executed, but by its nature, it needs to interact with services that are simply not present on a local machine or reasonable to mock with any level of fidelity. This ultimately requires that developers operate a cloud-based "local" RTE.

Cloud-Based IDEs

Cloud-based IDEs exist (AWS Cloud9, for example); they attempt to move all development activities into the Cloud by supporting all of the required IDE capabilities within the Cloud. In doing so, they eliminate or minimize operational complexities that can result from not having all of the tools and capabilities needed for development operate within an environment under the developer's complete control. Unfortunately, these IDEs are not permissible at this time, because in moving all activities to the Cloud via web browser-based interfaces, they incur a potential for security vulnerabilities that GSA is not willing to tolerate. As a result, when developing software to run in Serverless environments, not all development activities can occur within the same platform.

Development Processes Laid Bare

The classic methods for software development are not without their problems, but they are a known quantity to the seasoned developer. Developing against Serverless platforms that prohibit local execution of code has the potential to hamstring a project's primary asset: the experience that developers have with their tools.

All of the potential problems within a Serverless environment stem from the relocation of development activities to a platform that is less familiar to the developers and seemingly not in their control, or from dividing activities across locational boundaries. Fortunately, these problems are not as daunting -- or problematic -- as they may first seem.

So how can we facilitate development practice in the Cloud when we cannot collocate all development activities in one location? By examining those very same activities a little more closely, we can find mitigations -- we might even find a few advantages.

Development Activities Defined

Let's examine each of the previously mentioned capabilities; to recap, those are:

  1. Code Repository Interaction
  2. Code Editing
  3. Developer's Build
  4. Developer's RTE Control and Operation
  5. Deployment to the Developer's RTE
  6. Debugging within the Developer's RTE

For clarity, we have simplified these and refer to the "local" development environment as the "Developer's" environment, as all elements within it need to be usable by a specific developer, whether physically located on their machine or colocated with other features in the Cloud. Keep in mind that some aspects, such as the RTE, could be shared as needs dictate. More on that later.

Code Repository Interaction

Pulling and checking in code is needed to support editing; it therefore needs to be available within the same platform as code editing. Developers need robust tools for interacting with the code repository, colocated with (and ideally integrated into) their editors. Code checkout is also needed for building or deployment (depending on the needs of the technologies used), as wherever these processes take place for a developer's RTE, there will be a need to get the developer's code changes there.

If the RTE and/or the build process is not colocated with the developer's editing capabilities, this will also require a process change. A team's developers may already practice the "save and check in often" principle, in which case they are already using a parking or work branch. If a developer is not used to having a branch in the repository specifically for their unfinished work, they will need to adjust, as this optional (though recommended) practice becomes a requirement in order to transport the developer's working version of the code to a place where it can be deployed to a remote RTE.

Code Editing

Developers expect a certain level of assistance with respect to code editing: intellisense, class checking, class path management -- all capabilities built into the code editors integrated into industry-standard IDEs. Because Cloud-based IDEs are currently prohibited for developing GSA software, developers will be required to use more traditional IDEs, installed on Government-Furnished Equipment (GFE), whether that be a physical laptop issued by the agency or a provided Virtual Desktop Infrastructure (VDI) instance. This has the advantage of allowing the bulk of a developer's work to occur within a toolset with which they are likely to be familiar and comfortable. In nearly all cases, IDEs approved for use at GSA provide some form of integration with code repositories, allowing code editing and interaction with repositories to occur within the same software development tool.

Developer's Build

Whether a codebase needs to be compiled depends upon the nature of the technology it is developed in. Technologies like PHP and Node.js/JavaScript do not need to be compiled, as they are interpreted, whereas languages such as Java, C#, and even TypeScript need to be compiled. When a technology does require compilation prior to deployment, a build process is required; even interpreted technologies may benefit from a build process, in order to provide linting and packaging prior to deployment.

Regardless of the technology or need, creating a single build process that can be applied by both developers and CI/CD pipelines is an art. Often, code bases will maintain two different build processes (either formally or informally) -- one for developer use on their local systems, and one for use by the automated build processes typically executed on a build server (such as Jenkins). In order to use a single build process, care must be taken in structuring the build scripts to ensure that they can be executed easily by both the developer and the build automation. This often impacts how the code is organized within the repository, how environment and build variables are defined and stored, and how they are injected into build processes. Managing these impacts is critical to defining an easy-to-use build process, as these same areas are impacted by other considerations as well.
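
As an illustration of the single-build-script approach, here is a minimal sketch in Python using only the standard library: it lints and packages a function's source into a deployable zip. The variable names (BUILD_ENV, SRC_DIR, DIST_DIR), the dist/ layout, and the use of flake8 as the linter are assumptions for illustration, not a prescribed GSA convention; the point is that every environment-specific value arrives via environment variables, so a developer's shell and a Jenkins stage invoke the same script identically.

```python
#!/usr/bin/env python3
"""Single build script usable from a developer's machine and from a Jenkins job.
All environment-specific values come in via environment variables rather than
being hard-coded, so the same script runs anywhere. (Illustrative sketch: the
variable names, paths, and lint command are assumptions.)"""

import os
import shutil
import subprocess
import sys
from pathlib import Path

# Values injected by the caller (developer shell or Jenkins), with safe defaults.
BUILD_ENV = os.environ.get("BUILD_ENV", "dev")       # e.g. dev, test, prod
SRC_DIR = Path(os.environ.get("SRC_DIR", "src"))     # function source code
DIST_DIR = Path(os.environ.get("DIST_DIR", "dist"))  # build output location


def lint() -> None:
    """Run the project's linter; fail the build on findings."""
    # Assumes flake8 is available -- substitute whatever linter the team uses.
    subprocess.run([sys.executable, "-m", "flake8", str(SRC_DIR)], check=True)


def package() -> Path:
    """Zip the source tree into a deployable artifact named for the target env."""
    DIST_DIR.mkdir(parents=True, exist_ok=True)
    artifact = DIST_DIR / f"app-{BUILD_ENV}"
    # shutil.make_archive appends .zip and returns the full path it wrote.
    return Path(shutil.make_archive(str(artifact), "zip", root_dir=SRC_DIR))


if __name__ == "__main__":
    lint()
    print(f"Built {package()}")
```

A developer might run this as BUILD_ENV=dev python build.py, while a Jenkins stage would export its own values before invoking the very same script.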

Developer's RTE Control and Operation

Software developers need control over an RTE in order to rapidly perform iterative development. For discussion purposes, let us take a typical Java Enterprise Edition (JEE) developer's RTE as a starting point for comparison to Cloud development operations, as it is still a fairly common platform for GSA software applications.

Developers author code using their editor or IDE, then build and deploy the new version of the code to a local RTE, where they can test its functioning using a number of different tools and techniques. They analyze the current version's responses, make adjustments to the code, and build and deploy once more, until the code is both complete and performing as intended. Developers can do this rapidly because they have a local instance of the RTE -- in the case of JEE applications, an instance of the application server (such as JBoss, WebSphere, or Tomcat) and the JDK (Java Development Kit), which acts as an execution engine for the Java bytecode that constitutes both the application server and the deployable file resulting from building the developer's code. The developer can control their local RTE, starting and stopping execution and deploying to the local environment as needed; because the developer's version of the code is running in an RTE that is isolated to their own machine, the changes the developer makes largely do not impact the work of others who may be working on the same or adjacent code. All of this supports the development process and typically takes place prior to the developer checking the code back into the repository, allowing them to check code in only when their work is complete. Of course, there are other reasons to check code in, such as loss prevention, or, in the case of operation within GSA's Standard VDI environment, the impermanence of the system itself. This is where the use of work branches (sometimes referred to as 'parking branches', because they allow the developer to 'park' incomplete code) comes into play, as well as a number of other branching strategies.

It is important to note the level to which encapsulation is baked into JEE development. Java in general, and JEE in particular, makes extensive use of encapsulation at a number of levels, alongside liberal use of the Delegation Design Pattern. Java libraries are compiled and packaged into JAR (Java ARchive) files, and Java applications are compiled and packaged into WAR (Web application ARchive) or EAR (Enterprise ARchive) files; these are unpackaged into predictable file structures so that the locations of constituent files can be passed to the JDK or JRE (Java Runtime Environment). JEE application servers and the JDK itself come packaged with and use so many libraries that facilitate, standardize, and manage communication between the developer's code and any external resource that it can be easy to forget that those components are there and separate from the language itself. Over the years, these mechanisms have been baked into the development process and are often now taken for granted. Any shift in the technological platform (be it hardware, middleware such as application server software, or the programming language) requires that we reexamine our assumptions and how we work, and leverage our knowledge of how these complex ecosystems work to understand the new platform.

As intimated earlier, not all development practices require that the developer's RTE be colocated with the developer's code editor, or even dedicated to a single developer. This is reminiscent of older methods of software development, and, as with the resurgence of Structured Programming, these methods too are cyclical. The key feature needed for practicing iterative development is the ability to execute code independent of others' activities, in an environment where deployment can occur frequently and rapidly, and from which the developer can obtain adequate feedback to monitor the state of the code and, if necessary, terminate runaway or dangerous execution. Any toolset which meets these requirements should be sufficient to support an experienced developer, provided they make the effort to acclimate by mapping their experience onto the new toolset.

Deployment to the Developer's RTE

Code (in the form of deployables, in the case of Java and other compiled or translated languages) created or edited by developers within their IDE in a local environment needs to be deployed to wherever the RTE resides. When iterative development is being performed solely within a developer's GFE (physical hardware or a VDI environment), the act of deploying the code to a locally running RTE is typically trivial. There are numerous ways in which deployment can be handled; frequently, local deployment is handled differently from deployment into the team's SDLC environments. If a developer's RTE is no longer colocated with their editor (or with where build processes occur), then deployment of iteratively-modified code becomes more complex.

Debugging within the Developer's RTE

Again using JEE development practices as a jumping-off point: developers execute code within an RTE that they control as part of standard iterative development practice, in order to ensure that the code they have written thus far does what they expect, and if it does not, to identify at what step it is failing, and how. There are numerous techniques involved in this practice, but all of them require gaining access to output from the RTE where the code is being executed. The quicker the feedback, the faster the developer can iterate, and the faster and more accurately they can develop code. In short, debugging has to happen where the code is executing; if the code is executing in the Cloud, then that is where the debugging must happen, too.

As their cornerstone development activity, a developer will write or edit code within their editor (typically a standard function of an IDE); they will then build the code, and deploy it to an application server running locally on their GFE. The developer then examines purpose-defined output of the code to determine whether or not execution is flowing through the code as desired.

The output is the key, though it can come in a number of forms, depending upon the techniques used.

Regardless of the mechanism by which it comes, feedback as to the execution state, in the form of some kind of output, is at the core of how developers use iterative development practices to author software. Methods for obtaining the feedback and techniques for managing the feedback-development loop may vary, but the processes are similar.
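
To make the notion of purpose-defined output concrete, here is a minimal sketch (in Python rather than Java, purely for brevity) of a unit of work instrumented with log statements at the points where a developer would want feedback. Run locally, the output goes to the console; deployed to a platform such as AWS Lambda, the same statements are forwarded to CloudWatch Logs. The function and field names are illustrative assumptions.

```python
import json
import logging

# The same statements serve as the feedback channel locally (console) and in
# the Cloud (Lambda forwards stdout/stderr output to CloudWatch Logs).
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)


def process_order(order: dict) -> dict:
    """Illustrative unit of work instrumented with purpose-defined output."""
    logger.debug("received order: %s", json.dumps(order))

    total = sum(item["qty"] * item["price"] for item in order.get("items", []))
    logger.debug("computed total=%s for order_id=%s", total, order.get("id"))

    logger.info("finished order_id=%s", order.get("id"))
    return {"id": order.get("id"), "total": total}


if __name__ == "__main__":
    # Local run: configure a console handler so the feedback is visible.
    logging.basicConfig(level=logging.DEBUG)
    print(process_order({"id": "demo-1", "items": [{"qty": 2, "price": 3.5}]}))
```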

The Net Impact to Development Processes

Now that we have broken down the capabilities and concerns developers need an IDE and related toolset to address, let us discuss how moving the target deployment resources to the Cloud as part of a Serverless platform affects each of them.

Code Editing and Repository Check-In

The editing of code and the interaction of developers with the code repository in support of making code changes remain the same; developers can use their preferred GSA-approved IDE from either their GSA Government-furnished laptop or from a GSA-provided VDI instance. In the case of VDI, the appropriate type of VDI instance depends upon developers' needs and the needs of their program.

This is the case regardless of whether or not a program has opted to utilize Cloud-based or local developer's RTEs.

Access between the repository location and the developers' platform (GFE or VDI) will need to be enabled via an appropriate Firewall Change Request, if access has not already been established. It is easy to forget this step, but ensuring that it has been taken care of prior to the start of work will significantly reduce headaches.

Remote Build and Deploy Processes

The ability to build (as previously discussed, as per the needs of the technology being used) and/or deploy code is handled by a Pipeline (read the strategy discussion for Jenkins CI/CD Pipelines for more details). The Serverless Compute Model offers components and solutions that allow programs to establish CI/CD pipelines for all of their SDLC environments. This includes capabilities that also support the instantiation of ad hoc cloud-based RTEs, suitable for use as developer RTEs.

While the use of local developer RTEs may be suitable for running individual code units (i.e. Lambda function handler code and related unit tests), interactions with other Serverless services and resources (e.g. SNS, SQS, Step Functions, etc.) may require the use of remote RTEs, depending upon the needs of your program.
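
As a sketch of what running individual code units locally can look like, the following hypothetical Lambda handler accepts an injectable SNS client, so its unit test runs entirely on the developer's machine without touching any Cloud service. The function, topic, and environment variable names are assumptions for illustration, and boto3 is assumed to be installed.

```python
"""A Lambda handler whose AWS client can be injected, so the unit test below
runs entirely on a developer's machine with no AWS access. (Function, topic,
and variable names are illustrative assumptions.)"""

import json
import os
import unittest
from unittest.mock import MagicMock

import boto3


def handler(event, context, sns=None):
    """Publish the incoming event to an SNS topic named by configuration."""
    sns = sns or boto3.client("sns")           # real client only outside tests
    topic_arn = os.environ["ORDER_TOPIC_ARN"]  # injected by the deployment
    sns.publish(TopicArn=topic_arn, Message=json.dumps(event))
    return {"statusCode": 202}


class HandlerTest(unittest.TestCase):
    """Local unit test: the SNS client is a stub, so no Cloud services are touched."""

    def test_publishes_event(self):
        os.environ["ORDER_TOPIC_ARN"] = "arn:aws:sns:us-east-1:000000000000:orders"
        fake_sns = MagicMock()

        result = handler({"id": 1}, None, sns=fake_sns)

        fake_sns.publish.assert_called_once()
        self.assertEqual(result["statusCode"], 202)


if __name__ == "__main__":
    unittest.main()
```

Verifying how the same handler behaves once real topics, IAM policies, or downstream services are involved is exactly the work that still requires a remote RTE.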

What separates a remote developer RTE from the team's SDLC Development environment are two factors:

  1. Where the code comes from (remote developer RTEs should obtain code from purpose-built workbranches, and not from any branch that is part of the team's SDLC process)
  2. The permanence of the environment (remote developer RTEs should be created dynamically as needed, and dismantled afterwards)

If developers do not have access to Cloud-based RTEs against which they can develop, then the Development Environment is the first place the code will have been executed within the context of services that are only available within the Cloud. This means that the code intended to be promoted to the next SDLC environment may include defects that are only visible when the code interacts with Cloud-based services. This presents an increased risk of defects, or at least means that a certain class of defects can only be detected and removed at a later stage of the development process, increasing the cost of finding and fixing them.

However, the use of remote developer RTEs does come at an increased operational cost and additional complexity within your pipeline tools configuration in order to support the instantiation and management of the ad hoc environments.

If your program does need to support remote developer RTEs, some of the greatest impacts to tooling will be in how build and deployment processes are implemented. These processes need to be designed to support interaction with workbranches and deployment to non-SDLC-related (i.e. ad hoc) environments. Because of the highly customizable nature of the pipeline IaC provided as part of the SCM, these considerations need to be kept in mind by the tenant, as they can impact many other aspects of their tooling choices and SDLC implementation.
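
To make the two distinguishing factors above concrete, here is a hedged sketch of an ad hoc developer RTE lifecycle using boto3 and CloudFormation: the environment is named after the developer's workbranch, created only when needed, and dismantled afterwards. The naming scheme, template path, and tags are assumptions; a program's actual pipeline (for example, a parameterized Jenkins job) would perform equivalent steps.

```python
"""Sketch of an ad hoc developer RTE tied to a workbranch: the environment is
named after the branch, created only when needed, and torn down afterwards.
The template path and naming scheme are assumptions for illustration."""

import re
import sys

import boto3

cfn = boto3.client("cloudformation")


def stack_name_for(branch: str) -> str:
    """Derive a CloudFormation-safe stack name from a developer workbranch."""
    return "devrte-" + re.sub(r"[^a-zA-Z0-9-]", "-", branch)[:100]


def create_rte(branch: str, template_path: str = "template.yaml") -> str:
    """Stand up an isolated environment instance for this workbranch."""
    name = stack_name_for(branch)
    with open(template_path) as fh:
        cfn.create_stack(
            StackName=name,
            TemplateBody=fh.read(),
            Capabilities=["CAPABILITY_IAM", "CAPABILITY_NAMED_IAM"],
            Tags=[{"Key": "purpose", "Value": "developer-rte"},
                  {"Key": "workbranch", "Value": branch}],
        )
    cfn.get_waiter("stack_create_complete").wait(StackName=name)
    return name


def dismantle_rte(branch: str) -> None:
    """Tear the environment down once the developer is finished with it."""
    name = stack_name_for(branch)
    cfn.delete_stack(StackName=name)
    cfn.get_waiter("stack_delete_complete").wait(StackName=name)


if __name__ == "__main__":
    # Example usage: python adhoc_rte.py feature/ABC-123 create|delete
    branch, action = sys.argv[1], sys.argv[2]
    create_rte(branch) if action == "create" else dismantle_rte(branch)
```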

Remote RTE Management

As mentioned previously, the ability for developers to control their RTEs is critical for iterative development. Developers need to be able to stop and start the environment in order to run tests, provide new input, and review results.

Within AWS, this can be accomplished in a number of ways:

  1. Through Jenkins jobs
  2. Through the AWS Console
  3. Through the AWS Command-Line Interface (CLI)

Control methods through Jenkins jobs and through the AWS Console have pre-existing implementations available with robust security solutions, while access through the AWS Command-Line Interface (CLI) requires discussion with your Security Engineer.
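
As one illustration of the programmatic path (which, like the AWS CLI, requires the security discussion noted above), the sketch below uses boto3 to send test input to a function running in a remote developer RTE and print both its result and the tail of its log output. The function name shown is a hypothetical, namespace-derived example.

```python
"""Illustrative sketch of exercising a function in a remote developer RTE:
send it test input, then read back the result and the tail of its log output.
The function name is an assumption derived from a workbranch namespace."""

import base64
import json

import boto3

lam = boto3.client("lambda")


def exercise(function_name: str, test_event: dict) -> None:
    # LogType="Tail" asks Lambda to return the last few KB of log output
    # alongside the function's response.
    response = lam.invoke(
        FunctionName=function_name,
        Payload=json.dumps(test_event).encode(),
        LogType="Tail",
    )
    payload = json.loads(response["Payload"].read())
    log_tail = base64.b64decode(response["LogResult"]).decode()

    print("result:", json.dumps(payload, indent=2))
    print("--- log tail ---")
    print(log_tail)


if __name__ == "__main__":
    exercise("devrte-feature-ABC-123-orders", {"id": "demo-1"})
```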

Debugging in the Cloud

Developers will need to perform debugging activities within the Cloud environment, whether their developer RTE resides on their local machine or within the Cloud itself; this is where the greatest operational difference to processes may arise, as it requires developers to learn new tools in order to obtain the output needed to perform debugging and testing validation.

Each Cloud platform has its own tools and services for logging, monitoring, and debugging; in terms of execution within AWS, the specific services available are:

  1. Amazon CloudWatch, for metrics and logs describing how resources respond
  2. AWS CloudTrail, for recording who made which requests
  3. AWS X-Ray, for tracing individual requests end-to-end across components

Both CloudWatch and CloudTrail could be used to monitor a service; however, CloudWatch would tell you how the resources responded to any given request, as well as allow you to aggregate that data into an overall view of performance metrics, whereas CloudTrail would allow you to identify who made the request, providing a means to monitor access compliance and security. Meanwhile, X-Ray would allow you to trace a specific request end-to-end to observe how various components responded.

In addition to these powerful tools, many services, such as AWS Lambda and Step Functions, provide additional views into events and activity around service instances, as well as several forms of debugging tools.
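
As a concrete example of retrieving that feedback, the following sketch pulls recent error output for a single function from CloudWatch Logs. Lambda writes to a log group named /aws/lambda/<function-name>; the function name and the ERROR filter pattern are illustrative assumptions.

```python
"""Minimal sketch of pulling recent CloudWatch Logs output for a Lambda
function during debugging. The function name and filter pattern are
illustrative assumptions."""

import time

import boto3

logs = boto3.client("logs")


def recent_errors(function_name: str, minutes: int = 15):
    """Yield (timestamp, message) pairs for recent log events matching ERROR."""
    start = int((time.time() - minutes * 60) * 1000)  # epoch milliseconds
    paginator = logs.get_paginator("filter_log_events")
    for page in paginator.paginate(
        logGroupName=f"/aws/lambda/{function_name}",
        startTime=start,
        filterPattern="ERROR",  # narrow the feedback to failures
    ):
        for event in page["events"]:
            yield event["timestamp"], event["message"].rstrip()


if __name__ == "__main__":
    for ts, message in recent_errors("devrte-feature-ABC-123-orders"):
        print(ts, message)
```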

Dynamically Allocated Remote Environments

Within this article, the term dynamically allocated remote environment is used to describe a Cloud-based infrastructure instance that has been created for a specific, limited purpose, and that will be destroyed after it has served that limited purpose. It is considered remote, as it does not exist within the developers' GFE or VDI instances. Such an instance operates independently of other instances (such as the Development or Test Environments), and may or may not contain all elements or components of the entire software system. This brings up four questions: why, when, where, and what?

Why do we need a dynamically allocated remote environment?

As previously stated, developers may need a place in which to experiment with (i.e. test) how the code they are developing interacts with upstream and downstream processes, beyond what can be simulated on their local machine. When developing microservices and systems utilizing FaaS technologies, the deployables are smaller relative to the overall scope of the system, so this need arises more often and earlier in the development process. Ultimately, the goal is to allow code to be iteratively developed in an environment that more accurately represents the execution environment, thereby allowing issues to be detected earlier in the process (or prevented outright), when they cost less to eliminate.

When do we need to create such an environment?

A cloud-based RTE may not be required for every bit of development. When the portion of the system being developed has significant interactions with other services or components, or when the functionality has a highly dynamic computational need based upon its inputs, it may be more beneficial to develop the code within a Cloud-native environment. This work may be done by more than one developer, or may require that separate units of work (e.g. user stories) be carried out in coordination with each other within a shared environment. The creation of environments (and therefore the allocation and expenditure of Cloud resources) should be tied to the work at hand, rather than maintaining extra fallow environments. This dynamic allocation of resources (and therefore of environments) is one of the chief benefits of Serverless technologies.

Where do we actually create the environment?

As mentioned within the article on Separate Accounts for Serverless, the terms environment and account are not used synonymously; the Development Account may contain multiple development environments, including environments that function as developers' RTEs. In order to allow for dynamically allocated environments, the pipeline tooling needs to be capable of receiving code input from an equally dynamically defined workbranch, and of encapsulating the resources it creates, so that newly instantiated resources do not conflict with existing component instances belonging to other instances of the system or subsystem.

So What Exactly is an Environment?

Ultimately, an environment is a namespace, configuration, and code branch, unified by a limited purpose, and instantiated by a pipeline into a set of isolated resources. The pipeline needs to accept dynamically defined (i.e. not previously known or defined) workbranch names because the workbranch contains code not yet ready for inclusion within the Development Environment. The namespace is used when the pipeline instantiates the resources in order to avoid naming conflicts, allowing the new resources to operate independently from other instances that might already exist. Systems or subsystems that need to communicate with other components may need to be built with a means of injecting (or configuring) the names/handles of required resources or components, in order to ensure that each instance is talking to the correct instance of the other resource; this may require that a configuration mechanism be implemented within the pipeline itself, allowing for the injection of dependent resource names into the instance at the time of creation.

It is recommended that resource names not be defined within the codebase itself, as this can allow for names from one environment to leak into another accidentally; the namespace can also be useful in defining and managing this kind of resource configuration.
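
The sketch below illustrates one way a pipeline might derive a namespace from a workbranch and inject resource names as configuration at deployment time, rather than defining them in the codebase. The naming scheme and variable names are assumptions for illustration only, not a prescribed convention.

```python
"""Sketch of namespace-driven resource naming and configuration injection.
The namespace is derived from the workbranch, used to name this instance's
own resources, and used to point the instance at the correct dependent
resources; nothing is hard-coded in the application source. The naming
scheme shown is an assumption."""

import re


def namespace_for(branch: str) -> str:
    """Turn a workbranch name into a short, resource-safe namespace."""
    return re.sub(r"[^a-z0-9-]", "-", branch.lower())[:40].strip("-")


def environment_config(branch: str) -> dict:
    """Environment variables a pipeline would inject when deploying this instance."""
    ns = namespace_for(branch)
    return {
        "NAMESPACE": ns,
        # Resources owned by this instance carry the namespace in their names.
        "ORDER_TABLE": f"{ns}-orders",
        "ORDER_TOPIC": f"{ns}-order-events",
        # Dependencies are also resolved by namespace, so each instance talks
        # only to its own copies (or to an explicitly shared instance).
        "PRICING_FUNCTION": f"{ns}-pricing",
    }


if __name__ == "__main__":
    for key, value in environment_config("feature/ABC-123-checkout").items():
        print(f"{key}={value}")
```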

VDI as a Developer Platform

There are several reasons to recommend VDI as a development platform. These reasons can be organized into two key areas:

  1. Consistent Tooling
  2. Fast Path Towards Productivity

For these reasons we recommend that developers consider using VDI as their development platform of choice for developing software for GSA.

There are additional benefits to using VDI as a development platform; those will be described as well. In most cases, we also recommend that teams take advantage of the FAS Toolset for VDI, or FAST VDI.

Consistent Tooling

Oftentimes, differences between how developers configure their environments can impact the development process and hinder collaboration. Tooling configurations, or even the use of different tools, can change how defects appear, or even hide or exacerbate issues. Getting consistent behavior out of development tools helps determine how productive a team is. VDI with the FAST VDI toolset can help with this, as it ensures that all of the team's VDI instances are configured correctly and consistently, since instances are created from scratch each time developers log in.

The FAST VDI Team works diligently to ensure that the tools provided are configured correctly, and ready to use. Key tools, such as Microsoft Visual Studio Code (VSCode) and Eclipse have been configured to ensure that the workspaces are persisted on users' shared drives, and that they are capable of retaining plugins and IDE workspace configurations.

Fast Path Towards Productivity

A key feature of VDI is that it can be accessed through Citrix, allowing instances to be accessed not only from GFE, but also from users' personal devices. This has the benefit of facilitating productivity for new developers and teams; as long as team members have been given provisional clearance (chiefly, that they have been issued an ENT login and GSA email address), they can be issued a VDI instance and be productive almost immediately. This was particularly useful during the early phase of the COVID-19 pandemic, when developers had to wait to receive GFE through the mail. Because of VDI, developers could be productive before their GFE arrived and was configured.

Additionally, FAST VDI requires only a single ServiceNow ticket to install over twenty software packages (and growing) that would otherwise each require a separate ticket.

Collectively, these features allow new development teams to ramp up quickly, as they significantly reduce the time needed to get equipment configured and operational.

VDI, Security, and the Cloud

The resources used by developers (DBs, application APIs, code repositories, and other assets in lower SDLC environments) need to be accessible from the development platform. When the platform is government-furnished equipment, those resources need to be accessible from an exceptionally large IP range, as GFE IPs are dynamically assigned to machines as they connect to the network (either directly within a facility or via VPN).

The Admin VDI option has a feature that can facilitate security. Development teams can request that the VDI Team create an Admin Group for their team; users added to the Admin Group see an entry within their VDI dashboard for connecting to a special Admin VDI instance (different from the default Admin VDI instance, which can be confusing at first). These special instances are all created within a subnet designated specifically for the group; only members of the group will have instances instantiated within that subnet, and the subnet does not overlap with any other group's subnet.

Using an Admin VDI Group effectively provides a limited IP range for your team, which can then be used when defining firewall rules. Firewall rules can be created to allow access to your team's resources only from your team's Admin VDI Group IP range, thereby greatly reducing the ranges to which your resources need to be exposed. This can significantly increase security, and may be of particular value for teams developing functionality that handles sensitive data.

Support for Less Common Tools

Users can request software packages to be installed as part of their VDI instance, above and beyond what is included within FAST VDI; this is done in the same manner as requesting software for GFE laptops. The only caveat is that software has to be prepared and packaged for delivery to VDI images; as a result, not all software that has been approved for use at GSA can be installed on VDI.

Users may request that software that is currently not available be packaged for VDI. Additionally, FAST VDI provides a number of tools, and if a development team feels that a software tool should be added to the FAST VDI tool set, they can work with the FAST VDI Team to have that tool added.

Admin VDI differs from Standard VDI, in that Admin VDI instances exchange access to the Internet for heightened rights within the VDI instance; users who possess the license and installers for the software they require may be able to install the software onto the Admin VDI instance themselves. It should be noted that, while Admin VDI instances allow for greater flexibility and more persistence, there may still be software packages that do not function properly when installed within the Admin VDI platform.

If a team needs specific tools that cannot be installed on VDI, operating from GFE (either wholly or in part) may be the only option for developers.

Regardless of the platform from which a development team chooses to operate, the concepts discussed in this article should help to outline the considerations and tools needed to make Serverless development a success.