
Exclusive: The anatomy of AI, by Keysight’s Jeff Harris

Published on 2023-05-04 10:52:36 UTC by Simon Finlay

Jeff Harris, Vice President, Global Corporate and Portfolio Marketing of Keysight Technologies, explores the ‘anatomy of AI’

It seems like everywhere you look you now see Artificial Intelligence (AI) touted in the unlikeliest products ranging from the advanced to the mundane.

Just the thought of AI powering your products sounds impressive, so of course you want to believe the claims.

However, much of the noise fails to convey what the AI does or why the manufacturer felt so confident about making the claim.

The engineer in me is always curious how things are built. That’s because I hate the concept of a ‘black box’ where we aren’t supposed to understand how calculations are programmed.   

So, let’s open the box and explore the anatomy of AI.

To achieve artificial intelligence, you first need two main ingredients: first, the ability to measure some parameter with an understanding of what the measurement means; and second, the ability to learn. The first part is all about metrology, otherwise known as the scientific study of measurement. The second part is called machine learning (ML), which gives systems the ability to recognise when a measurement is different from what was expected and change an operation without being explicitly programmed.

Ability to Collect Data

Metrology focuses on the deep understanding of a particular measurement. That measurement can be as simple and distinct as voltage, ground, or temperature, or as multi-modal as the functioning of aircraft control surfaces or complex manufacturing assembly lines.

• Measurement Depth: Whether you are measuring a single parameter or several, the depth of your measurement precision determines the level of programmability you can create. For instance, measuring a three-volt system to 1/10th of a volt is not as insightful as measuring it to 1/1000th of a volt.

• Data Feed: Measurement data is useful to an algorithm only if it is made available in a data feed. If, in the example above, a sensor is capable of measuring to 1/1000th of a volt but its data feed output is limited to one decimal place due to data bus limitations, that extra precision never reaches the algorithm (see the sketch after this list).

• Multiple Data Feeds: Whenever possible, measuring more parameters leads to better decision-making. For instance, if you can measure temperature as well as voltage to 1/1000th of a volt, you can now correlate voltage shifts with temperature changes.
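To make the data feed point concrete, here is a minimal sketch in Python. It is not from the article; the sensor value, function names, and bus limit are hypothetical, chosen to mirror the 1/1000th-of-a-volt example above:

# Hypothetical sketch: a sensor that measures to 1/1000th of a volt,
# read through a data feed that can only carry one decimal place.

def sensor_reading() -> float:
    """Pretend sensor with millivolt precision."""
    return 3.014  # e.g. a 3 V rail that has drifted upward by 14 mV

def data_feed(reading: float, decimals: int) -> float:
    """Model of a data bus that carries only `decimals` decimal places."""
    return round(reading, decimals)

raw = sensor_reading()
coarse = data_feed(raw, decimals=1)  # bus limited to 1/10th of a volt
fine = data_feed(raw, decimals=3)    # bus carries the full 1/1000th precision

print(f"raw={raw} V  coarse feed={coarse} V  fine feed={fine} V")
# The coarse feed reports 3.0 V: the 14 mV drift is invisible to the algorithm.
# The fine feed reports 3.014 V: the drift is available for correlation.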

Getting to Machine Learning

Ultimately, ML feeds data from multiple sources into algorithms that mimic the way humans learn, gradually improving their accuracy. Once you have the data feeds, there are three essential building blocks to achieve ML: an algorithm to interpret the data, a table of expected results with reactive outcomes, and a feedback loop (a minimal sketch of all three follows the list below).

• The Algorithm: The true ‘smarts’ of any machine learning system lie in its ability to take data feed inputs, run a set of calculations/instructions, and interpret the output. Interpreting means recognising whether an output calculation is within or outside the expected range and enacting new commands accordingly. In the previous example, if a voltage measurement varies far outside the expected range and the temperature is above nominal, the algorithm might activate an internal fan.

• Reactive Outcomes: In its simplest form, the table of expected results can be a ‘look-up’ table that maps combinations of data feed inputs to a series of reactive command instructions. The more comprehensive the table, the more mature and valuable the ML becomes. More interactive MLs can make changes incrementally, such as changing the course of a drone to avoid obstacles based on real-time sensing, which requires both continuous sensing and constant adjustment.

• Feedback Loop: The final element is the feedback loop, which allows the system to verify whether what it did was sufficient or needs further refinement, and enables it to adjust its parameters to improve future performance.
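As an illustration of how the three blocks fit together, here is a minimal Python sketch of the article’s voltage/temperature/fan example. The thresholds, state labels, and function names are assumptions invented for this sketch, not anything specified by the author:

# Minimal sketch of the three building blocks: an interpreting algorithm,
# a look-up table of reactive outcomes, and a feedback loop.
# All thresholds and commands are hypothetical.

NOMINAL_V = 3.000  # expected rail voltage
NOMINAL_T = 25.0   # expected temperature, degrees C

# Reactive-outcome table: (voltage state, temperature state) -> command
OUTCOMES = {
    ("high", "hot"): "activate_fan",
    ("high", "ok"):  "log_warning",
    ("ok",   "hot"): "activate_fan",
    ("ok",   "ok"):  "no_action",
}

def interpret(voltage: float, temp: float) -> tuple[str, str]:
    """The 'algorithm': classify raw inputs against expected ranges."""
    v_state = "high" if voltage > NOMINAL_V + 0.010 else "ok"
    t_state = "hot" if temp > NOMINAL_T + 5.0 else "ok"
    return v_state, t_state

def control_step(voltage: float, temp: float) -> str:
    """Look the interpreted state up in the reactive-outcome table."""
    return OUTCOMES[interpret(voltage, temp)]

def feedback(prev_temp: float, new_temp: float) -> bool:
    """Feedback loop: re-measure and check the reaction was sufficient."""
    return new_temp < prev_temp

print(control_step(3.014, 31.2))  # voltage high and temperature hot -> activate_fan
print(feedback(31.2, 28.7))       # True: the fan brought the temperature down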

Adding multiple ML capabilities focusing on different aspects of larger systems, as well as adding more sensor data, enables machine learning at a more complex system level.

Very advanced ML can add to its ‘look-up’ tables as it encounters new combinations of sensor inputs, enact variants of its reactive outcome instructions, and measure feedback on how well each reaction performed.

These become self-adjusting algorithms that derive knowledge from data to predict outcomes. And the more algorithms are trained, the more accurate the output.
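A hedged sketch of what such a self-adjusting table might look like, continuing the toy example above; the variant actions and the random feedback score are stand-ins for real reactive commands and real re-measurement:

import random

# Hypothetical sketch of a self-adjusting look-up table: on an unseen
# input combination, trial each variant reaction, score it via feedback,
# and record the best performer for future use.

table: dict[tuple[str, str], str] = {("ok", "ok"): "no_action"}
VARIANTS = ["activate_fan", "reduce_clock", "log_warning"]

def feedback_score(action: str) -> float:
    """Stand-in for re-measuring how well a reaction performed."""
    return random.random()  # a real system would measure the actual effect

def react(state: tuple[str, str]) -> str:
    if state not in table:
        # New combination: try each variant and keep the best-scoring one.
        scored = {action: feedback_score(action) for action in VARIANTS}
        table[state] = max(scored, key=scored.get)
    return table[state]

print(react(("high", "hot")))  # learns an entry the first time it is seen
print(table)                   # the table has grown by one learned entry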

Artificial Intelligence

Now that you have trainable algorithms, you are most of the way towards delivering AI. This requires taking the outputs from the collection of ML engines and combining them with sufficient guidelines and iteration for the algorithm to make real-time decisions. 

Each time an AI algorithm processes data, iterates, weighs the iterative response against new data coming in, and uses that combination to determine its output choices, it has achieved decision-making status.

This perpetual cycle enables the AI to keep learning and improving the decision quality.

This entire process can be very simple, like the example of the voltage and temperature sensor loop, or it can be as complex as an attack drone’s flight control system.
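One speculative way to sketch that decision cycle in code: combine the recommendations of several ML engines under trust weights, then use feedback to shift trust away from the engines whose advice proved inaccurate. The engine names, weights, and learning rate below are all invented for illustration:

# Invented sketch of the decision cycle: several ML engines each make a
# recommendation (say, a fan speed between 0 and 1); the AI combines them
# under trust weights, then feedback reduces trust in engines whose
# advice proved inaccurate.

engines = {"thermal_ml": 0.5, "power_ml": 0.5}  # initial trust weights

def decide(recommendations: dict[str, float]) -> float:
    """Trust-weighted combination of per-engine recommendations."""
    total = sum(engines.values())
    return sum(engines[name] * rec for name, rec in recommendations.items()) / total

def update_trust(errors: dict[str, float], lr: float = 0.1) -> None:
    """Feedback: reduce trust in proportion to each engine's observed error."""
    for name, err in errors.items():
        engines[name] = max(0.05, engines[name] - lr * err)

speed = decide({"thermal_ml": 0.8, "power_ml": 0.4})
update_trust({"thermal_ml": 0.1, "power_ml": 0.4})  # power_ml was further off
print(speed, engines)  # the next decision leans more on thermal_ml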

The DNA Markers of AI

So how can you predict how well any AI algorithm will perform? Just like in humans, you can look at its DNA markers. In its most basic form, implementing AI enables a machine to replace having a human in the decision loop by simulating how we would sense, process, and react to information and modify a workflow for a given set of conditions. At its core, you should look at three common DNA markers:

1. How good is the measurement & simulation: understand the manufacturer’s ability to measure, and whether they have sufficient understanding and experience to create a digital twin of the environment.

2. Algorithms, analytics & insights: how deep the developer’s knowledge of the signal’s core characteristics is, and how that relates to expected responses, will determine the depth of the ‘look-up’ table of expected results.

3. Knowledge of workflow automation: the understanding, at a system level, of how multiple iterative ML outputs could work together to optimise a desired outcome.

Therefore, the quality of an AI algorithm is a function of both its depth (its understanding of the metrology in any given area of measurement) and its breadth (how widely it can combine such measurements across the system).

This brings us back to the fact that AI, when well executed, is not an overhyped emerging technology. Instead, it’s the only way engineers can manage the exponential complexity in new designs.

As futurist Gray Scott succinctly says: “There is no reason and no way that a human mind can keep up with an artificial intelligence machine by 2035.” Engineers recognise this and have started on the path of infusing ML and AI across their systems.

AI starts with having smart, motivated engineers who understand measurement science, understand system behaviour expectations well enough to create digital twins for developers, and are driven to take engineering to the next level.

Article: Exclusive: The anatomy of AI, by Keysight’s Jeff Harris
Source: Security Journal UK
https://securityjournaluk.com/exclusive-the-anatomy-of-ai-by-keysights-jeff-harris/
