
Advancing Government Services With Responsible Generative AI


Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence

Secure and Compliant AI for Governments

Change management and inclusive policies that support workers will enable the public sector to tap the full potential of AI while ensuring no one is left behind. On October 30, 2023, President Biden issued an executive order (EO) setting new standards for the safety and security of artificial intelligence (AI).

Given how widely data is shared and repurposed, shared dependencies, and therefore shared vulnerabilities, among systems will be widespread. As a result, there is a need to rapidly understand how a compromise of one asset or system affects other systems. Agencies should determine how AI attacks are most likely to be used and craft response plans for these scenarios. The resulting implementation decision should state how much AI should be used within an application, ranging from full use, through limited use with human oversight, to no use. This spectrum affirms that vulnerability to attacks does not necessarily mean that a particular application is ill-suited for AI. Instead, suitability should be measured by the informed results of the suitability test, especially the questions regarding the consequences of an attack and the availability of other options.

More federal agencies join in temporarily blocking or banning ChatGPT

For hundreds of years, humans have been wary of inscribing human knowledge in technical creations. The DoD, for example, has already shown attention to understanding and addressing the security risks of employing AI. In other contexts, however, such as industry settings where parties have shown both a disregard for and an inability to address other cyber risks, these discussions may need to be forced by an outside regulatory body such as the FTC. Policymakers and industry alike must study and reevaluate the planned role of AI in many applications.

What countries dominate AI?

The United States and China remain at the forefront of AI investment, with the United States leading overall since 2013 with nearly $250 billion invested cumulatively across 4,643 companies, and these investment trends continue to grow.

Although no comprehensive federal laws yet regulate AI in the United States, a growing number of guidelines and frameworks provide direction on how to develop so-called ethical AI. One of the most detailed was recently unveiled by the Government Accountability Office: the AI Accountability Framework for Federal Agencies, which provides guidance for agencies that are building, selecting, or implementing AI systems.

Limited memory machines can perform classification as well as use historical data to make predictions about the future. Many are built on neural networks, which are modeled after the way neurons connect and share information in the brain. However, limited memory machines require large volumes of data to train their algorithms.
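As a minimal illustration of the "use historical data to predict the future" trait described above, the sketch below fits a simple autoregressive model over a sliding window with NumPy. The function names and window size are illustrative choices, not drawn from any framework cited in this article.

```python
import numpy as np

def fit_autoregressive(series, window=3):
    # Learn weights that predict the next value from the previous `window` values.
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.array(series[window:])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def predict_next(series, w, window=3):
    # Apply the learned weights to the most recent window of history.
    return float(np.array(series[-window:]) @ w)

history = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
w = fit_autoregressive(history)
prediction = predict_next(history, w)  # approximately 11.0 for this linear series
```

The point is not the model's sophistication but the pattern: the system's "memory" is the recent history it was shown, which is also why such systems need large volumes of representative data.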

Significance of the Executive Order on Artificial Intelligence

Further, AI attacks fundamentally expand the set of entities that can be used to execute cyberattacks. For the first time, physical objects can now be used for cyberattacks (e.g., an AI attack can transform a stop sign into a green light in the eyes of a self-driving car simply by placing a few pieces of tape on the stop sign itself). Data can also be weaponized in new ways using these attacks, requiring changes in the way data is collected, stored, and used.

This journalist’s Otter.ai scare is a reminder that cloud transcription isn’t completely private. (The Verge, 16 Feb 2022)

Many of the methods to verify these properties rely on openly publishing datasets, methods, models, and APIs to the systems. However, these exact actions double as a list of worst practices for protecting against AI attacks. In already deployed systems that require both verified fairness and security, such as AI-based bond determination,[74] it will be difficult to balance both simultaneously.

They also operate in a dynamic and evolving arena of complex regulation, where compliance is mandatory given the nature of the work and customers. All of this is compounded when running GRC programs across multiple departments, where a certain level of standardization and oversight is required, as well as a level of segregation and autonomy depending on the maturity and sensitivity of their work. “But the theme that we’re hearing across the board is how we can transform the way they can deliver services to citizens that could really drive critical outcomes,” Ling told FedScoop.

What is the NIST AI Executive Order?

The President's Executive Order (EO) on Safe, Secure, and Trustworthy Artificial Intelligence (14110), issued on October 30, 2023, charges multiple agencies – including NIST – with producing guidelines and taking other actions to advance the safe, secure, and trustworthy development and use of artificial intelligence …

“Artificial intelligence (AI) systems have the potential to bring many benefits to society. However, for the opportunities of AI to be fully realised, it must be developed, deployed and operated in a secure and responsible way. Cyber security is a necessary precondition for the safety, resilience, privacy, fairness, efficacy and reliability of AI systems,” the CISA guidelines say.


Unlike traditional cybersecurity attacks, these weaknesses are not due to mistakes made by programmers or users. Put more bluntly, the algorithms that make AI systems work so well are imperfect, and their systematic limitations create opportunities for adversaries to attack. Just as the FUSAG could expertly devise what patterns needed to be painted on the inflatable balloons to fool the Germans, with a type of AI attack called an “input attack,” adversaries can craft patterns of changes to a target that will fool the AI system into making a mistake. This attack is possible because when patterns in the target are inconsistent with the variations seen in the dataset, as is the case when an attacker adds these inconsistent patterns purposely, the system may produce an arbitrary result. As a result, while it may have been necessary to make the balloons actually look like tanks to fool a human, fooling an AI system can require only a few stray marks on the target or subtle changes to a handful of pixels in an image.

AI Regulation Is Coming. (HBR.org Daily, 17 Aug 2021)

These are successes that, morals aside, may have evoked jealousy from the marketing departments of Fortune 500 companies. If confronted with better content filters, they are likely to be the first adopters of AI attacks against these filters. In the hardest case where nothing about the model, its dataset, or its output is available to the attacker, the attacker can still try to craft attacks by brute force trial-and-error. For example, an attacker trying to beat an online content filter can keep generating random attack patterns and uploading the content to see if it is removed. The methods underpinning the state-of-the-art artificial intelligence systems are systematically vulnerable to a new type of cybersecurity attack called an “artificial intelligence attack.” Using this attack, adversaries can manipulate these systems in order to alter their behavior to serve a malicious end goal. As artificial intelligence systems are further integrated into critical components of society, these artificial intelligence attacks represent an emerging and systematic vulnerability with the potential to have significant effects on the security of the country.
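The brute-force trial-and-error approach described above can be sketched in a few lines. The filter and mutation strategy here are toy stand-ins invented for illustration (a keyword blocklist evaded by inserting zero-width characters), not a description of any real platform's filter.

```python
import random

def content_filter(text):
    # Toy stand-in for a black-box filter: blocks text containing "forbidden".
    return "forbidden" in text.lower()

def brute_force_evade(text, trials=1000, seed=0):
    # Attacker's view: no model, no dataset, no scores -- only upload/removed
    # feedback. Keep mutating at random until the filter passes the content.
    rng = random.Random(seed)
    for _ in range(trials):
        chars = list(text)
        i = rng.randrange(len(chars))
        chars.insert(i, "\u200b")  # invisible zero-width space
        candidate = "".join(chars)
        if not content_filter(candidate):
            return candidate
    return None

evaded = brute_force_evade("this is forbidden content")
```

Even this crude loop illustrates why output-only access is not a strong defense: each query leaks a bit of information, and automated attackers can afford thousands of queries.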

Risk #7: Ethical and Moral Dilemmas

Finally, my Administration will help develop effective labeling and content provenance mechanisms, so that Americans are able to determine when content is generated using AI and when it is not. These actions will provide a vital foundation for an approach that addresses AI’s risks without unduly reducing its benefits. (b)  Promoting responsible innovation, competition, and collaboration will allow the United States to lead in AI and unlock the technology’s potential to solve some of society’s most difficult challenges. This effort requires investments in AI-related education, training, development, research, and capacity, while simultaneously tackling novel intellectual property (IP) questions and other problems to protect inventors and creators.

  • Given that AI is increasingly used in high-stakes public sector applications, and several instances of harm have already resulted, efforts to govern and regulate public sector applications of AI are emerging, with many centred in the US.
  • The interagency council’s membership shall include, at minimum, the heads of the agencies identified in 31 U.S.C. 901(b), the Director of National Intelligence, and other agencies as identified by the Chair.
  • Readers of this website should contact their attorney to obtain advice with respect to any particular legal matter.
  • Rather than centrally collecting potentially sensitive data from a set of users and then combining their data into one dataset, federated learning instead trains a set of small models directly on each user’s device, and then combines these small models together to form the final model.
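The federated learning pattern in the last bullet can be sketched concisely: each client takes a training step on its own private data, and only the resulting model parameters, never the raw data, are sent to the server for averaging. This is a minimal NumPy illustration with invented function names, not a production implementation.

```python
import numpy as np

def local_update(w, X, y, lr=0.1):
    # One gradient step of linear regression on a single client's private data.
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def federated_round(w_global, client_data, lr=0.1):
    # Each client trains locally; the server only ever sees model parameters.
    local_models = [local_update(w_global, X, y, lr) for X, y in client_data]
    return np.mean(local_models, axis=0)  # simple federated averaging
```

A real deployment adds secure aggregation and many rounds of this loop, but the privacy-relevant property is visible even here: the raw `X` and `y` never leave the client.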

We’ll address each risk one at a time and provide practical tips on how to mitigate the risks using methods available today. But to accomplish this, local government officers need to become aware of the unique challenges of applying AI to government operations. That said, the potential benefits of applying AI to improve local government – especially to augment and empower overworked staff to do more with less – are enormous.

What is the Defense Production Act AI?

AI Acquisition and Invocation of the Defense Production Act

Executive Order 14110 invokes the Defense Production Act (DPA), which gives the President sweeping authorities to compel or incentivize industry in the interest of national security.

What is the executive order on safe secure and trustworthy?

In October, President Biden signed an executive order outlining how the United States will promote safe, secure and trustworthy AI. It supports the creation of standards, tools and tests to regulate the field, alongside cybersecurity programs that can find and fix vulnerabilities in critical software.

What is good governance in AI?

These best governance practices involve “establishing the right policies and procedures and controls for the development, testing, deployment and ongoing monitoring of AI models so that it ensures the models are developed in compliance with regulatory and ethical standards,” says JPMorgan Chase managing director and …
