Getting smarter and faster – Artificial intelligence and cybersecurity

Artificial intelligences (AIs) are getting smarter and faster. This creates tricky questions that we cannot answer at the moment.

AIs don’t yet have human-level abilities, and they might never have them. But there are questions of liability, rights and moral status that we still have to consider.

Today, AI covers a smart but limited set of software tools. But in the future, as artificial intelligence becomes more complex and ubiquitous, we may be forced to rethink the rights and wrongs of how we treat AI – and even how it treats us.

AIs are narrow in nature, performing tasks like image recognition, fraud detection, and customer service.

But as AIs develop, they will become more and more autonomous. At some point, they risk doing harm.

Who is really at fault when AIs make mistakes is a question that will confuse businesses and excite lawyers as they struggle to determine who could and should be held responsible for any resulting harm.

Today, in most cases of AI problems, the fault is obvious. If you buy an AI and use it straight out of the box, and it does something terrible, it’s probably the manufacturer’s fault. If you build an AI and train it to do something terrible, the fault is probably yours. But it won’t always be so clear.

The complications start when these systems acquire memories and develop agency – when they start doing things that a manufacturer or user never intended or wanted them to do.

We may just have to accept that we won’t always understand why AIs do what they do and live with this uncertainty – after all, we do the same for other humans.

Over time, AIs may become so sophisticated that they are held legally and morally responsible for their own actions, whether we understand them or not. In law, it is already possible for non-human entities to be held legally responsible for wrongdoing through what is known as legal personality: companies have legal rights and responsibilities in the same way as people. Potentially, the same could one day apply to AIs.

This means that in the future, if we can convict AIs of a crime, we may even have to think about whether they should be punished for their crimes if they do not understand the right and wrong of their actions – often a threshold for responsibility in humans.

When it comes to AI, cyberspace and national security, there are more questions than answers. But these questions are important because they touch on key issues related to how countries use increasingly powerful technologies while ensuring the safety of their citizens. For example, few subjects related to national security are as technical as nuclear security. How could the links between AI and cyberspace impact the security of nuclear systems?

A new generation of AI-enhanced offensive cyber capabilities will likely exacerbate the risks of military escalation associated with emerging technologies, especially inadvertent and accidental escalation.

Examples include the increasing vulnerability of nuclear command, control and communications (NC3) systems to cyber attacks. In addition, remote sensing technology, autonomous vehicles, conventional precision munitions and hypersonic weapons pose new challenges to nuclear assets that have so far been concealed and hardened. Overall, this trend could further erode the survivability of a nation-state’s nuclear forces.

AI, and the advanced capabilities it enables, is a natural manifestation – not the cause or origin – of an established trend in emerging technologies. The increasing speed of war, shrinking decision-making timeframes, and the entanglement of nuclear, cyber and conventional capabilities are leading nation states to adopt destabilizing launch postures.

AI will make existing cyber warfare capabilities more powerful. Rapid advances in AI and increasing degrees of military autonomy could amplify the speed, power, and scale of future attacks in cyberspace.

Specifically, there are three ways AI and cybersecurity converge in a military context.

First, advances in autonomy and machine learning mean that a much wider range of physical systems are now vulnerable to cyber attacks, including hacking, identity theft and data poisoning. In 2016, a hacker stopped a Jeep on a busy highway and then interfered with its steering system, causing it to accelerate. Additionally, machine learning-generated deepfakes (i.e., manipulated audio or video) have added a new, and potentially more sinister, twist to the risk of miscalculation, misperception and unintentional escalation that originates in cyberspace but has a very real physical-world impact. The magnitude of this problem ranges from smartphones and home electronics to industrial equipment, roads and pacemakers – applications associated with the ubiquitous connectivity phenomenon known as the Internet of Things (IoT).

Second, cyber attacks that target AI systems can offer attackers access to machine learning algorithms and potentially vast amounts of data from facial recognition and intelligence collection and analysis systems. These elements could be used, for example, to cue precision munition strikes and support intelligence, surveillance and reconnaissance missions.

Third, AI systems used in conjunction with existing cyber tools could become powerful force multipliers, enabling sophisticated cyber attacks to be carried out on a larger scale (both geographically and across networks), at faster speeds, simultaneously across several military domains, and with greater anonymity than before.

During the early stages of a cyber operation, it is usually not clear whether an adversary intends to gather intelligence or prepare an offensive attack. Probing of cyber defenses will likely heighten an opponent’s fear of a preemptive strike and strengthen first-mover incentives. In extremis, the strategic ambiguity caused by this problem can trigger “use it or lose it” situations.

Open-source intelligence suggests, for example, that Chinese analysts view the Chinese NC3 system’s vulnerability to cyber infiltration – even if an attacker’s objective were limited to cyber espionage – as a serious and growing national security threat. In contrast, Russian analysts tend to view Russia’s nuclear command, control, communications and intelligence (C3I) network as more isolated and therefore relatively insulated from cyber attacks.

Even a small degree of uncertainty about the effectiveness of AI-enhanced cyber capabilities during a crisis or conflict would reduce the risk tolerance of both parties, increasing the incentive to strike preemptively.

It is now conceivable that a cyberattack could compromise a state’s nuclear and non-nuclear command and control systems alike.

Paradoxically, AI applications designed to improve the cybersecurity of nuclear forces could simultaneously make cyber-dependent nuclear weapon systems (for example, communications, data processing, or early-warning sensors) more vulnerable to cyber attacks.

Ironically, new technologies designed to improve information flows, such as 5G networks, machine learning, big data analytics and quantum computing, can also undermine the clear and reliable flow of information and communication that is essential for effective deterrence.

Advances in AI could also exacerbate this cybersecurity challenge by enabling improvements to offensive cyber capabilities. By automating advanced persistent threat operations, machine learning and AI could dramatically reduce the vast human resources and high levels of technical skill required to carry out such operations, especially against hardened nuclear targets.

During a crisis, a nation state’s inability to determine an attacker’s intention may lead it to conclude that an attack (threatened or actual) was intended to undermine its nuclear deterrent. For example, an AI-enabled deepfake generated by a third party, coupled with data-poisoning cyber attacks, could trigger crisis escalation between two (or more) nuclear-armed states.

The explainability (or “black box”) problem associated with AI applications can further exacerbate this dynamic. Insufficient understanding of how and why AI algorithms reach a particular judgment or decision would make it difficult to determine whether datasets have been deliberately compromised to produce false results, such as attacking the wrong targets – even allies – or misdirecting allies during combat.

Rapid advances in military AI and autonomy could amplify the speed, power, and scale of future attacks in cyberspace through several interconnected mechanisms: the ubiquitous connectivity between physical and digital information ecosystems; the creation of vast troves of data and intelligence gathered through machine learning; and the emergence of powerful force multipliers for increasingly sophisticated, anonymous and possibly multi-domain cyber attacks.

Despite all of these high-tech AI scenarios, as someone once wisely commented: “The exact moment the technology got out of hand: when social media and phones met…” As always, be blessed and stay safe in the digital and physical worlds this weekend as we cheer on our national 7s teams – Go Fiji Go!

