Ethical AI: How SGS’ Values Drive Responsible Standards

SparkCognition Government Systems | November 4, 2021

As artificial intelligence grows more prevalent in our economy, government, and military, high expectations for how AI can improve these institutions frequently come with concerns about ceding too much control to the technology too fast. Adhering to specific norms that uphold ethical AI principles is essential to overcoming this inherent tension.

Now, with AI computational power doubling faster than Moore’s Law would predict, and the global AI market expected to expand at a compound annual growth rate of 40.2% from 2021 to 2028, we are entering an era in which AI will be virtually ubiquitous in our lives. Without consistent adherence to ethical AI principles, the consequences of irresponsible AI-driven outcomes will be just as widespread.
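To put that growth rate in concrete terms, the back-of-the-envelope calculation below shows the expansion a 40.2% CAGR implies over seven compounding years. The arithmetic is ours, offered only as a rough illustration of scale, not a figure from any market report:

```python
# Rough illustration of what a 40.2% compound annual growth rate implies.
# This is back-of-the-envelope arithmetic, not a published market figure.
cagr = 0.402
years = 2028 - 2021  # seven compounding periods

multiple = (1 + cagr) ** years
print(f"Implied growth multiple over {years} years: {multiple:.1f}x")
# A market growing at 40.2% per year expands roughly 10.6x in that span.
```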

In the commercial market, the outgrowths of irresponsible AI include data inaccuracies, discriminatory results, implicit or explicit bias, and privacy transgressions.

When AI is layered into national defense systems, the stakes rise much higher. AI technology developed for national security supports critical missions deployed in dynamic and dangerous environments. Poorly designed AI in this setting may compromise the safety of our military personnel or of civilians, an unacceptable risk.

Recognizing this, the Department of Defense (DoD) embedded a Head of AI Ethics Policy within the Joint Artificial Intelligence Center (JAIC) in 2020, among other notable initiatives to operationalize ethical principles.

SGS takes responsible and ethical AI very seriously

SGS’ solid ethical reputation is a critical asset that we protect by understanding and following responsible principles at every step.

These principles include:

  • Human-in-the-loop review processes
  • Bias evaluation
  • Transparency and explainability guidelines
  • Reproducibility controls
  • Accuracy assurance
  • Privacy protection
  • The highest possible level of data security


Our Digital Maintenance Advisor™ (DMA) product provides a prime example of how we incorporate human-in-the-loop review processes, transparency, and explainability into the design of our products. DMA uses natural language processing to automate the extraction of key information from large volumes of historical records, such as records of maintenance activity on aircraft subsystems. After capturing the tribal knowledge of senior maintenance personnel, DMA’s symptom mapping organizes this data into a traceable workflow that less experienced maintainers can use for decision support. They can use DMA to quickly map from a discrepancy to the most probable corrective action, based on the unlocked historical records, and down to the specific job guide or manual that is the underlying source of the recommendation. Rather than building a ‘black box’ solution, we designed DMA to keep the maintainer in the loop so they can understand and explain the workflow leading up to their decision point. This level of transparency helps the human subject matter expert trust the AI’s findings while essentially ‘upskilling’ themselves in the process.
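DMA’s implementation details are proprietary, but the traceability pattern described above can be sketched in a few lines of illustrative Python. The names, records, and confidence values below are hypothetical, not actual DMA code or data; the point is simply that every suggested action carries the source document the maintainer can verify before acting:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """A corrective action that stays traceable to its source record."""
    corrective_action: str
    confidence: float        # probability estimated from historical records
    source_document: str     # the job guide or manual behind the suggestion

# Hypothetical symptom map mined from historical maintenance records.
SYMPTOM_MAP = {
    "hydraulic pressure low": [
        Recommendation("Inspect pump seal for leakage", 0.72, "Job Guide JG-12-30"),
        Recommendation("Replace hydraulic filter", 0.18, "Manual TO-1A-7-2"),
    ],
}

def advise(discrepancy: str) -> list[Recommendation]:
    """Return ranked candidate actions; the maintainer makes the final call."""
    candidates = SYMPTOM_MAP.get(discrepancy.lower(), [])
    return sorted(candidates, key=lambda r: r.confidence, reverse=True)

for rec in advise("Hydraulic pressure low"):
    # Human-in-the-loop: the tool explains itself, the maintainer decides.
    print(f"{rec.confidence:.0%}  {rec.corrective_action}  [{rec.source_document}]")
```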

Although designing and developing AI and machine learning solutions for the military to the highest ethical criteria demands talent, time, and money, it’s not only for compliance reasons that we expend the extra effort. We know that high ethical standards produce the best possible solutions for our customers, ones they can trust to perform their job precisely as intended.

How our values align with DoD AI principles

One of the nine values articulated in our code of conduct is ‘Customer first.’ In our case, the customer is almost always, directly or indirectly, the DoD.

In keeping with our laser focus on integrity and compliance, we commit to supporting the principles the DoD has adopted to uphold its legal, ethical, and policy commitments in the field of AI. The solutions we provide will always stand up to the rigorous standards the Department has set for deploying AI in combat and non-combat environments:

  1. Responsible. DoD personnel will exercise appropriate levels of judgment and care while remaining responsible for the development, deployment, and use of AI capabilities.
  2. Equitable. The Department will take deliberate steps to minimize unintended bias in AI capabilities.
  3. Traceable. The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.
  4. Reliable. The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life-cycles.
  5. Governable. The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.


Our product development exemplifies our commitment to responsible, equitable, traceable, reliable, and governable AI. For instance, the Digital Supply Advisor™ product builds in explainability at its core, demonstrated by our approach to feature importance identification: the data science techniques we employ to ascertain the connection and signal strength between different features in the data. We work to understand how one feature versus another impacts the outcomes of our models, which builds traceability into the AI’s reasoning. Generating a clear, methodical, and explainable path for why a model reached its conclusion ensures that the warfighter can understand the model’s recommendations, trust them, and use them reliably.
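Permutation importance is one widely used technique for the kind of feature importance identification described above. The sketch below, which uses scikit-learn on synthetic data, illustrates the general approach rather than Digital Supply Advisor’s actual pipeline:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for supply-chain features; not real program data.
X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy degrades:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)

for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```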

Our team and board are standard-bearers for ethical AI in national defense

SGS benefits from exceptional teamwork among our experts at the nexus of artificial intelligence and defense to reinforce our standards for ethical AI. Working together, we not only push the envelope of what AI can accomplish—we push each other to be as thoughtful, careful, and accountable in developing our AI technologies as we are curious about what they should be tasked with.

Likewise, our board consists of some of the most decorated leaders in government and defense today. Their wisdom and experience contribute a deep perspective on responsibility, service, national security, and ethical leadership to guide us through the rapidly evolving AI era. One of our board members, Robert O. Work, recently served as Vice-Chair of the National Security Commission on Artificial Intelligence (NSCAI), which delivered its exhaustive Final Report earlier this year. Among its topline conclusions, recommendations, and accompanying blueprints for action, the report devotes two chapters to responsible and ethical AI priorities. Attached to the report is a 46-page accounting of “Key Considerations,” stemming from the NSCAI’s efforts to document how government agencies and the DoD should operationalize responsible and ethical AI principles. We commend the service of Sec. Work in delivering this historic report to Congress and the Executive Branch.

We can think of no better expression of our own SGS values as they relate to ethical AI than this guiding statement of principle from the NSCAI’s 2019 Interim Report:

  • The American way of AI must reflect American values—including having the rule of law at its core. For federal law enforcement agencies conducting national security investigations in the United States, that means using AI in ways that are consistent with constitutional principles of due process, individual privacy, equal protection, and nondiscrimination. For American diplomacy, that means standing firm against uses of AI by authoritarian governments to repress individual freedom or violate the human rights of their citizens. And for the U.S. military, that means finding ways for AI to enhance its ability to uphold the laws of war and ensuring that current frameworks adequately cover AI.


As the first full-spectrum artificial intelligence company devoted entirely to the government and national defense mission, we build AI technology that supports critical missions in sometimes dangerous operating environments. We know that the processes we follow to design, measure, and evaluate our solutions don’t just matter for our business credibility; they may directly impact the safety of our military and the security of our country. We have to do things the right way, or not at all.

Infographic from Chapter 7 of NSCAI Final Report: “Establishing Justified Confidence in AI Systems”