Costing

Overview

·         We offer several contracting strategies including Firm Fixed Price (FFP), Cost Plus Fixed Fee (CPFF), Level of Effort, and Hourly.

·         With detailed specifications, we will perform FFP contracts.

·         We work smart, not hard. We use proven technologies and low-risk techniques wherever possible.

·         We have learned from leading software costing experts and have utilized COCOMO II.

·         We perform most work remotely, which saves considerable overhead cost and is a major advantage in recruiting skilled talent.

·         We tend to use small, lean multidisciplinary teams but back them up with highly skilled Program Managers and Software Subject Matter Experts (SMEs) who have a proven track record in solving major IT program-level issues, both technical and non-technical.

Details

Software development is difficult and so often unsuccessful that the problem has its own label: the Software Crisis. According to Gartner, 80% of software projects fail. If your project or program is in jeopardy, here are some strategies your organization might use in addition to standard program management approaches. The following approaches are specific to managing IT programs.

We have rescued 8 major US Government programs from being categorized as failures. The solutions have included hardware migrations, software virtualization, cloud infrastructures, wrapping or encapsulating lower-level software, and integrating new or different components. We understand how to do software costing and are willing to back it up with a firm-fixed-price contract.

The GDA Group has learned from Dr. Barry Boehm, a leader in the software costing discipline; we understand the COCOMO methodology and apply it as needed.

1        Fixed Price Contracts

Even though software development is difficult and risky, if you have well-defined requirements and specifications and a detailed architecture, The GDA Group will be pleased to write a Firm Fixed Price contract. This is rare in the software industry, and you should take advantage of it!

2        Cost Plus Fixed Fee

This is the most common arrangement. It minimizes the contractor’s risk and places most of the risk on the customer.

3        Hourly

The rate will depend on the nature of the work.

4        Level of Effort

This type of arrangement is useful for breaking a large effort into phases that must be performed in sequence. For technically risky projects, it minimizes capital outlays in the event the project or program fails early on.

5        Risk Management Strategies

Risk management techniques are critical to the success of a program: they have a major effect on cost and schedule, and even on whether the program succeeds at all. The GDA Group uses and highly recommends the following strategies for any IT program:

·         Proof of Concept

·         Rapid Prototyping

·         Rapid Application Development

·         Minimum Viable Product

Techniques

1        Out-of-the-Box Thinking

Linear thinking in software development is usually problematic and is a major reason most software development programs fail. Software is supposed to be innovative, and innovative thinking is needed to architect, design, and develop it. Many of the following cost and pricing techniques rely on out-of-the-box thinking.

2        Commercial Off the Shelf (COTS)

We highly recommend buying or licensing software that provides most of what you need; that way you are not “reinventing the wheel.” Commercial Off the Shelf (COTS) software can be such a solution. While some COTS products are not customizable, others are designed so they can be customized. Salesforce is an example of a customizable product: while you cannot add a new custom payment method, you can choose from an existing list and select which payment methods your customers may use.

3        Open-source software (OSS)

Open-source software is another viable choice. Even if the OSS does not do everything you need, it can generally be tweaked to do what you want by modifying the source code. We utilized Keycloak and Accumulo for the Air Force Intelligence Agency, one of our great OSS success stories.

4        “Leading edge” technology

“Leading edge” technology can easily make the difference between a “fair” product and a “great” product; it can also make the difference between a project that fails and one that succeeds. If the product is OSS, it is generally reasonable to proceed, since any defects can be resolved by modifying the OSS. If the product is proprietary, it is risky and demands risk reduction strategies. One such strategy is obvious: extensive testing of the “leading edge” technology. Another is to segment it from the rest of the application using a façade-type pattern or an API; any deficiencies in the “leading edge” technology can usually be handled in one of these two layers.
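
A minimal Python sketch of the façade approach follows. RiskyEngine is a hypothetical stand-in for a proprietary “leading edge” product; only the façade layer touches it directly, so any deficiency handling is confined to one class.

    # facade_sketch.py - a sketch of isolating risky "leading edge"
    # technology behind a facade layer. RiskyEngine is hypothetical.

    class RiskyEngine:
        """Hypothetical leading-edge product with an unproven interface."""
        def compute(self, payload: dict) -> dict:
            # Imagine this call occasionally fails or changes between releases.
            return {"result": sum(payload.get("values", []))}

    class EngineFacade:
        """The only layer the rest of the application may call. If the
        leading-edge product misbehaves, the fix is confined here."""
        def __init__(self):
            self._engine = RiskyEngine()

        def total(self, values):
            try:
                response = self._engine.compute({"values": values})
                return float(response["result"])
            except Exception:
                # Deficiency handling lives here, not in application code.
                return float(sum(values))  # safe fallback path

    print(EngineFacade().total([1.0, 2.5, 3.5]))  # 7.0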

 

The GDA Group’s successes using “Leading edge” technology:

·         Air Force Logistics Command – 5GL and automated migration

·         FBI – Automated Testing Tool

·         JSOC – Simulator

·         Vricon

5        5th Generation Language (5GL) Development Tools

Studies by software industry research organizations, including Gartner and Forrester, show that 5GL development tools not only save time and thereby cost, but also allow software development to be done by end users.

 

For the above reasons, most modern software development relies on 5GL, which utilizes visual development tools and drag-and-drop interfaces. These tools generally provide GUI capabilities to design the back-end data storage first. Once the back end is designed, RAD tools automatically generate front-end functionality using the database schema as the primary input. Users (not just developers) can modify the database and web forms, including layouts, data editing, and data restrictions.
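
As a highly simplified, hypothetical illustration of schema-driven generation in Python: the database schema is the only input, and a basic HTML entry form is produced automatically (real RAD tools do far more than this):

    # schema_form_sketch.py - minimal illustration of schema-driven UI
    # generation: the database schema is the primary input, and a
    # front-end form is produced automatically.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, "
                 "name TEXT NOT NULL, credit_limit REAL)")

    def form_for_table(conn, table):
        """Emit a bare-bones HTML form from the table's schema."""
        rows = conn.execute(f"PRAGMA table_info({table})").fetchall()
        fields = []
        for _cid, name, sql_type, notnull, _default, pk in rows:
            if pk:                      # primary keys are system-assigned
                continue
            required = " required" if notnull else ""
            itype = "number" if sql_type in ("INTEGER", "REAL") else "text"
            fields.append(f'  <label>{name}: '
                          f'<input name="{name}" type="{itype}"{required}></label>')
        return "<form>\n" + "\n".join(fields) + "\n</form>"

    print(form_for_table(conn, "customer"))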

 

Such 5GL products fall into the category of no-code or low-code tools. While a detailed comparison of these tool types is available elsewhere, the following is a summary.

 

Low-code tools usually work by generating either 3GL or 4GL code to implement the needed functionality. A developer with the requisite 3GL or 4GL skills can usually modify or enhance the generated code to provide different functionality, so there is considerable flexibility to change the default functionality provided by the RAD tool.

 

Microsoft Access and Oracle Forms and Reports were among the earliest low-code platforms but have not kept pace with either web-based capabilities or more modern RAD products.

 

No-code tools do not produce any significant code that a developer could change; such products cannot be tailored or modified by developers in any significant way. They should be avoided in favor of low-code RAD tools.

 

We have developed several products using 5GL RAD-type tools, including Microsoft Access, Oracle Forms and Reports, PowerBuilder, and SIB VisionX.

 

The Air Force Logistics Command project is a success example of software development using 5GL.

 

6        APIs

Many COTS and OSS software packages are designed for easy integration with other COTS and OSS products through one or more Application Programming Interfaces (APIs). A parent software product can use a dependent product’s capabilities if the dependent product is designed with an appropriate API. SOAP and REST are the most commonly used API styles.
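
As a simple illustration, a parent product can consume a dependent product’s REST API in a few lines of Python; the endpoint URL and JSON fields below are hypothetical:

    # rest_client_sketch.py - minimal sketch of a parent product calling
    # a dependent product's REST API. The URL and fields are hypothetical.
    import requests

    BASE = "https://inventory.example.com/api/v1"   # hypothetical endpoint

    resp = requests.get(f"{BASE}/parts/1234", timeout=10)
    resp.raise_for_status()             # surface HTTP errors early
    part = resp.json()                  # e.g. {"id": 1234, "qty": 7}
    print(part)

    # Updating the dependent system through the same API:
    resp = requests.put(f"{BASE}/parts/1234",
                        json={"qty": part["qty"] - 1}, timeout=10)
    resp.raise_for_status()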

 

Some of our success stories are the Air Force Logistics Command and DISA.

7        Artificial Intelligence

While the public perceives Artificial Intelligence (AI) as a possible solution, many AI experts see AI as a major contributor to the software crisis. Used wisely and carefully, however, AI can be a substantial tool for a software product. Since AI depends on large amounts of data, a data lake is a prerequisite to get started.

 

Our experience with the NGA shows our AI capabilities.

8        Virtualization

Virtualization techniques are widely used when trading out older hardware in favor of new hardware, new operating systems, or BIOS-type software. Virtualization is also very useful in testing software and hardware because it can introduce failure behaviors that are impractical to simulate realistically on production-like systems. It can also help use hardware resources efficiently by running several virtual machines on the same physical machine.
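
As one small example of this resource-sharing idea, the libvirt Python bindings can enumerate the virtual machines sharing a single physical host. A minimal sketch, assuming a local QEMU/KVM hypervisor and the libvirt-python package:

    # vm_inventory_sketch.py - minimal sketch of inspecting the virtual
    # machines sharing one physical host, via the libvirt Python bindings.
    # Assumes a local QEMU/KVM hypervisor and the libvirt-python package.
    import libvirt

    conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
    try:
        for dom in conn.listAllDomains():   # every defined guest VM
            state = "running" if dom.isActive() else "stopped"
            print(f"{dom.name():20s} {state}")
    finally:
        conn.close()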

 

Good examples of this technique are what we did for

·         NGA in replacing old dumb terminals with modern Unix workstations.

·         NRO in system-wide failure simulation.

9        Containerization

Containerization is the next logical step up from virtualization in terms of efficiency. In virtualization, a full operating system such as Linux is present in each VM. Containerization improves efficiency by sharing a single operating system across many containers, up to hundreds. Each container can perform the same functions as a VM.
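
A minimal sketch of the idea using the Docker SDK for Python, assuming Docker is running locally and the docker package is installed; each container below behaves like a small isolated machine while sharing the host’s kernel:

    # container_sketch.py - minimal sketch of running several containers
    # that share one operating system kernel, via the Docker SDK for Python.
    import docker

    client = docker.from_env()

    # Start a handful of lightweight containers; none boots its own OS.
    containers = [
        client.containers.run("alpine", ["sleep", "60"], detach=True)
        for _ in range(5)
    ]

    for c in containers:
        print(c.short_id, c.status)

    for c in containers:        # clean up
        c.stop()
        c.remove()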

 

What we did for the FBI and Air Force Logistics Command are good examples of our use of containerization techniques.

10 Simulation

There are many cases where some functionality should be simulated rather than relying on actual production functionality. This is especially true when software pieces (i.e., sub-programs) that must ultimately work together need to be developed separately but in parallel for scheduling reasons. The sub-programs are initially simulated so the higher-level main program can be developed in stand-alone mode. Simulating failures that cannot easily be produced on an operational-like system is another use case.
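
A minimal Python sketch of the pattern follows; all names are hypothetical. The main program is developed against a simulator standing in for a sub-program still under parallel development, and the same simulator can inject failures:

    # simulation_sketch.py - minimal sketch of developing a main program
    # against a simulated sub-program. All names are hypothetical.
    import random

    class SensorSimulator:
        """Stand-in for a sub-program still under parallel development.
        Can also inject failures a real system cannot easily produce."""
        def __init__(self, fail_rate=0.0):
            self.fail_rate = fail_rate

        def read(self) -> float:
            if random.random() < self.fail_rate:
                raise IOError("simulated sensor failure")
            return random.uniform(0.0, 100.0)

    def main_program(sensor):
        """Higher-level logic, developed stand-alone against the simulator."""
        for _ in range(5):
            try:
                print(f"reading: {sensor.read():.1f}")
            except IOError as exc:
                print(f"recovered from: {exc}")

    main_program(SensorSimulator(fail_rate=0.3))
    # Later, the real sub-program replaces the simulator unchanged.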

 

The following are some real-world examples of using simulation:

·         NRO

·         NGA

·         Air Force Logistics Command

11 High-Performance Computing (HPC)

HPC takes advantage of Graphics Processing Units (GPUs) rather than commodity Central Processing Units (CPUs). GPUs can address many memory units at once, whereas CPUs generally address only one memory unit at a time. This makes GPUs much more efficient for mathematics that requires many simultaneous parallel operations. Our Vricon experience exemplifies our HPC expertise.
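
A small sketch of the difference using CuPy, a GPU array library with a NumPy-like interface (assumes a CUDA-capable GPU and the cupy package):

    # gpu_vs_cpu_sketch.py - minimal illustration of offloading parallel
    # math to a GPU. CuPy mirrors the NumPy interface, so the code for
    # the two paths is nearly identical.
    import numpy as np
    import cupy as cp

    n = 4096
    a_cpu = np.random.rand(n, n)
    b_cpu = np.random.rand(n, n)
    c_cpu = a_cpu @ b_cpu            # CPU: a few largely serial cores

    a_gpu = cp.asarray(a_cpu)        # copy data to GPU memory
    b_gpu = cp.asarray(b_cpu)
    c_gpu = a_gpu @ b_gpu            # GPU: thousands of parallel lanes

    # Verify the two paths agree (bring the GPU result back to the host).
    print(np.allclose(c_cpu, cp.asnumpy(c_gpu), atol=1e-6))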

12 Cloud Infrastructures

Modern on-premises data centers need a lot of capacity because they must handle a worst-case load during business hours. After business hours, most of this capacity is unused and therefore wasted. To handle a disaster or for contingency purposes, a redundant data center is generally needed, at least doubling the problem. With Commercial Cloud Providers (CCPs) such as Amazon, these efficiency problems largely disappear: unused capacity is rented or leased to other customers or users. Unlike on-premises data centers, with CCPs you can scale capacity on demand, so you pay only for what you are using. While CCP unit rates are not necessarily cheap, the result is a significantly lower Total Cost of Ownership (TCO) compared to dedicated on-premises data centers. Our experience with the FBI, Vricon, and JSOC demonstrates our capabilities.
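
For example, on AWS the scale-on-demand behavior is directly scriptable. A minimal boto3 sketch, where the Auto Scaling group name is hypothetical and credentials/region are assumed to be configured:

    # scaling_sketch.py - minimal sketch of on-demand capacity with AWS
    # Auto Scaling via boto3. The group name is hypothetical.
    import boto3

    autoscaling = boto3.client("autoscaling")

    # During business hours, run ten instances; after hours, shrink back.
    # On-premises, the idle machines would still be owned and powered.
    autoscaling.set_desired_capacity(
        AutoScalingGroupName="app-workers",   # hypothetical group name
        DesiredCapacity=10,
        HonorCooldown=False,
    )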

Successes

1        Air Force Logistics Command

The Air Force had a requirement to migrate from an obsolete Oracle Forms and Reports application front-end and an Oracle RDBMS back-end to an AWS Java environment. We developed an architecture, performed product selection, and prototyped the new system to prove the migration methodology. The new system used a modern RAD development tool, SIB VisionX, which could automatically produce modern Java code from the Oracle Forms and Reports high-level PL/SQL code. We prototyped and migrated multiple forms and reports from the old system to the new system using SIB VisionX.

2        JSOC

For JSOC, we provided a production environment and support for a wide group of custom JSOC applications, most intended to run in a Kubernetes environment provided by Rancher Federal on AWS Commercial, C2S, and SC2S enclaves. We identified several architectural issues involving missing components, including the ability to automatically back up and restore, and to generate and run IaC to clone VPCs and other elements as needed. N2WS was selected, prototyped, and integrated to implement this requirement. Development and test environments were needed for the development of the custom applications. We introduced Sequoia to allow simulated classified software development to be performed by non-cleared personnel, reducing the number of cleared personnel needed onsite.

 

3        FBI

We prototyped and architected multiple solutions to migrate a major legacy on-premises operational FBI application to an AWS environment. The solution we implemented involved multiple simultaneous efforts:

·         Migrated server-based products to run on AWS instances

·         Migrated Java executables to Docker containers

·         Migrated build pipelines from Ansible to GitLab

·         Provided an automated test capability via a new COTS product

Work was done in the GovCloud, SC2S, and C2S AWS regions.

4        Vricon

Architected and implemented a new AWS-based infrastructure for Vricon, a company headquartered in Sweden that created 3D models from 2D geospatial satellite imagery. The principal in-house product was a closely coupled, HPC-based software product. The solution used many unique features of AWS, including HPC and parallel file systems. The architecture had to support cloud-based features such as automatic scaling and adding spot instances to the infrastructure as needed for operations. The solution involved extensive software development using Python and AWS CloudFormation JSON.

 

5        DISA

Developed a programmatic interface to SharePoint without using the standard REST API, due to limitations in the provider’s offering. The customer’s version of SharePoint supported authentication only via the TLS protocol using both client and server certificates. We performed extensive prototyping and experimentation using a combination of a simulated server, multiple browsers, and multiple versions of software written in several languages and environments.
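
For illustration, mutual TLS of this kind can be exercised from Python’s requests library; the URL and certificate file paths below are hypothetical placeholders:

    # mtls_sketch.py - minimal sketch of calling a server that requires
    # mutual TLS (both client and server certificates). The URL and
    # file paths are hypothetical placeholders.
    import requests

    resp = requests.get(
        "https://sharepoint.example.mil/sites/docs",   # hypothetical URL
        cert=("client.crt", "client.key"),   # client certificate + key
        verify="server_ca.pem",              # CA bundle validating the server
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.status_code)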

6        Air Force Intelligence Agency

"DEWEY" was an Object-Based Production (OBP) system built directly on open-source software, including Docker, Kubernetes, Accumulo, MongoDB, Keycloak, and MS Active Directory, intended for use by U.S. Intelligence agencies. The major objective was to provide and execute a roadmap to migrate a DEWEY prototype system first to an MVP and then to a production-ready product.

The solution was a new PaaS. Alternative PaaS offerings were evaluated, including Docker EE, Pivotal Cloud Foundry, OpenShift, Cloudera, and Hortonworks, with OpenShift being the recommended PaaS. We designed a new security approach to address various authentication and authorization options that would ensure and preserve MAC and security labeling for a highly classified DoD environment. We evaluated several approaches for getting the system accredited for operations on JWICS and other networks. Work was done in an Agile environment using Scrum methodology within JIRA. We provided a version of DEWEY running on AWS, using OpenShift as the PaaS with OpenShift’s embedded Kubernetes for containerization. We updated the database API to include Accumulo as a SaaS for cell-based access control, and identified and resolved availability, scalability, and security issues.

7         Army IT Agency

Proposed, prototyped, integrated, demonstrated, and documented a new cloud-based system architecture for the Army IT Agency’s Chief Technology Officer. The new system was to enable testing of new software and capabilities before migration to the operational infrastructure.

8         NGA

NGA had 300 obsolete, hard-wired terminals distributed globally. We developed an improved architecture and provided a solution: we migrated a critical enterprise-level application from hardwired terminals to a modern GUI running on Unix workstations. The solution involved minor changes to the enterprise-level IBM VM custom application, virtualized software to capture input/output operations, and a new custom terminal emulator on the Unix workstations. The new system greatly improved end-user communications, since it used TCP/IP rather than RS-232 ASCII protocols. It also gave the program the ability to provide users with a whole suite of modern workstation-based tools and products, some of which were custom-developed and integrated with the rest of the solution.

 

For another NGA project, we developed software that could go back in time and replicate a previous satellite tasking strategy, saving users a large amount of effort in repeating the same tasking manually. While not quite Artificial Intelligence, it was close.

 

9        National Reconnaissance Office (NRO)

The organization needed the ability to deploy a classified intelligence collection system covertly in foreign territory. The system required remote, unattended 24/7 operation spanning months to years. The original contractor developed both the hardware and the software; however, on the first major test the system failed within a few days. To make matters worse, there was only one hardware suite and no software simulation capability, which made extensive full-scale testing impractical within the mandatory deployment schedule.

 

We were brought in to resolve these problems. Fixes and changes entailed:

·         Redesigned the poorly designed timer/deadman switch so it was ALWAYS active, and coded it in assembler language.

·         Ensured the system was restarted if the timer/deadman switch went beyond a customizable number of seconds.

·         Developed a simulation capability for the software that could inject many types of failures, such as failed operations and bad data.

o   Many such failures could only be simulated this way; they were not practical to exercise via short-duration full-scale testing.

·         The software would run normally when not in simulation mode; a minimal sketch of this watchdog-and-simulation design follows this list.
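
The following minimal Python sketch illustrates the watchdog-and-simulation design described above; the fielded implementation was in assembler, and all names, rates, and timings here are illustrative:

    # watchdog_sketch.py - illustrative sketch of an always-active deadman
    # timer plus a failure-injecting simulation mode. The fielded version
    # was written in assembler; names and timings here are illustrative.
    import random
    import threading
    import time

    TIMEOUT_SECONDS = 5          # customizable deadman threshold
    SIMULATION_MODE = True       # False in production: run normally

    last_heartbeat = time.monotonic()

    def restart_system():
        print("deadman expired -- restarting system")
        # Real system: hardware reset. Sketch: just reset the heartbeat.
        global last_heartbeat
        last_heartbeat = time.monotonic()

    def watchdog():
        """ALWAYS active: restarts the system if heartbeats stop arriving."""
        while True:
            if time.monotonic() - last_heartbeat > TIMEOUT_SECONDS:
                restart_system()
            time.sleep(1)

    def do_operation():
        if SIMULATION_MODE and random.random() < 0.2:
            raise IOError("injected failure: bad data")   # simulated fault
        time.sleep(0.5)                                   # normal work

    threading.Thread(target=watchdog, daemon=True).start()
    for _ in range(20):
        try:
            do_operation()
            last_heartbeat = time.monotonic()   # heartbeat after success
        except IOError as exc:
            print(f"handled: {exc}")            # recovery path under test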