A thread for all things AI

Posts: 1636
Joined: Fri Feb 20, 2015 4:24 pm
Location: Inland NW, U.S.
Has thanked: 2466 times
Been thanked: 2928 times

A thread for all things AI

Post by Spiritwind »

I’m going to just start this, mainly for myself, so I can get my mind around just how pervasive AI technology is becoming, and try to understand the many aspects of how it can, and is, being used, and what that might mean for us. I actually started collecting articles back in early 2020, but then never came back to it until now. It seems a timely subject to get a better grip on.

Here’s one to get started....

What is Artificial Intelligence Anyway?
By Benedict Dellot - December 15, 2016

https://www.thersa.org/discover/publica ... nce-anyway

Artificial intelligence is once again in the media spotlight. But what is it exactly? And how does it relate to developments in machine learning and deep learning? Below we spell out the various interpretations of AI and look back on how the technology has developed over the years.

The semantic quicksand
“The fundamental challenge is that, alongside its great benefits, every technological revolution mercilessly destroys jobs and livelihoods – and therefore identities – well before the new ones emerge.”

So said Mark Carney in a widely reported speech last week, as he referred to the potential impact of an oncoming wave of artificial intelligence.

We’ll pick up on his predictions another time, but for now it’s worth asking what exactly we mean by AI. How does it relate to machine learning and deep learning? And what separates innovations like chatbots from self-service checkouts, self-driving cars from search engines, and factory robots from automated teller machines?

For all the hype and postulating, there is surprisingly little discussion about the technology itself and how it came to be. Try Googling ‘what is artificial intelligence?’ and you’ll find very little in the way of solid definitions.

This is not a dull point about semantics. If we don’t know what the technology is and how it is manifested, how do we expect to judge its potential effects? And how will we know which industries and occupations are most likely to be transformed?

AI and its different guises
From my own reading of the limited literature, I’d say the following:
First, that artificial intelligence can be broadly defined as technology which replicates human behaviours and abilities conventionally seen as ‘intelligent’.
While many people focus on ‘general AI’ – machines with intelligence equal to, if not greater than, that of humans, able to turn their hand to almost any task – very little progress has been made in this domain. Aside from a handful of the most ardent optimists, there is a consensus that AI systems which can talk like us and walk like us, and which can essentially pass for human, are decades from realisation. HAL 9000 and C-3PO remain the stuff of science fiction.

In contrast, there have been significant and meaningful developments in ‘narrow AI’. These are machines that perform a specific function within strict parameters. Think of image recognition, information retrieval, language translation, reasoning based on logic or evidence, and planning and navigation. All are technologies that underpin services like route mappers, translation software and search engines.

Kris Hammond from Narrative Science usefully groups these tasks into three categories of intelligence: sensing, reasoning and communicating. Explained in his words, cognition essentially breaks down into “taking stuff in, thinking about it, and then telling someone what you have concluded”.

Mobile assistants like Apple’s Siri and Google Now make use of all three of these layers. They begin by using speech recognition to capture what people are asking (‘sensing’), then use natural language processing (NLP) to make sense of what the string of words means and to pull out an answer (‘reasoning’), and finally deploy natural language generation (NLG) to convey the answer (‘communicating’). The same process works whether you’re asking for the weather or directions to the nearest coffee shop.
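The three-layer pipeline can be sketched in code. This is a toy illustration, not how any real assistant is built: each function body stands in for a trained speech-recognition, NLP or NLG model, and the tiny knowledge base is invented.

```python
# A toy sketch of the sense -> reason -> communicate pipeline described above.
# All stages are stand-ins: a real assistant would use trained models
# rather than string matching and a hard-coded lookup table.

def sense(audio: str) -> str:
    """'Speech recognition': here the 'audio' is already a transcript."""
    return audio.lower().strip("?! ")

def reason(utterance: str) -> str:
    """'NLP': map the request to an answer from a tiny knowledge base."""
    knowledge = {
        "what is the weather": "sunny, 22 degrees",
        "where is the nearest coffee shop": "two blocks north",
    }
    return knowledge.get(utterance, "I don't know")

def communicate(answer: str) -> str:
    """'NLG': wrap the raw answer in a sentence."""
    return f"Here is what I found: {answer}."

def assistant(audio: str) -> str:
    return communicate(reason(sense(audio)))

print(assistant("What is the weather?"))
# -> Here is what I found: sunny, 22 degrees.
```

The point is only that the three stages compose in sequence; swapping each stand-in for a trained model gives the architecture the article describes.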

When it comes to robotics – which can be thought of as physical machines imbued with AI capabilities – we should also add a fourth category of movement.

A self-driving car, for example, will sense its environment using a variety of detectors (e.g. spotting a pedestrian walk across the road), deploy reason to decide whether there are any risks (e.g. of hitting the pedestrian), and then implement a necessary movement (e.g. slowing down or altering direction). The same process plays out in other advanced robots, including those found on the factory floors of manufacturers or the wards of hospitals and care homes (see for example Asimo).
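The sense-reason-act cycle reduces to a decision function over sensed state. The sketch below is purely illustrative – the thresholds and action names are invented, and a real vehicle fuses lidar, radar and camera input rather than two scalars:

```python
# Toy 'reason' step of the sense -> reason -> act loop described above.
# Inputs are the sensed state; the returned string is the chosen action.

def decide(distance_to_pedestrian_m: float, speed_kmh: float) -> str:
    """Pick an action from sensed state (thresholds are illustrative)."""
    if distance_to_pedestrian_m < 10:
        return "brake"
    if distance_to_pedestrian_m < 30 and speed_kmh > 40:
        return "slow_down"
    return "continue"

assert decide(5, 50) == "brake"        # pedestrian close: emergency stop
assert decide(20, 50) == "slow_down"   # risk ahead at speed: ease off
assert decide(100, 50) == "continue"   # clear road: carry on
```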

Getting off to a slow start
How did we get to this point?
After all, artificial intelligence as a field of research has been around for decades. Interest in AI stretches back to the 1950s, a period when Alan Turing devised the influential Turing Test to determine whether or not a machine could ‘think’. The Dartmouth College convention of 1956 is often cited as the landmark moment when computer scientists came together to pursue AI as a research field in its own right, powered by leading thinkers like Marvin Minsky.

Despite early enthusiasm and significant funding, however, initial progress in artificial intelligence was excruciatingly slow. DARPA, which had pumped millions of dollars into university departments during the 1960s, became particularly frustrated at the lack of headway in machine translation, which it had pinned its hopes on for counter-espionage. Closer to home, the UK’s 1973 Lighthill report raised serious doubts that AI was going to evolve at anything but an incremental pace.

The result was a radical cut in government funding and several prolonged periods of investor disillusion – what became known as the ‘AI Winters’ of the 70s and 80s. The circumstances were not helped by wildly optimistic early predictions, such as Minsky’s claim in 1970 that “[within] three to eight years we will have a machine with the general intelligence of an average human being”.

From the AI Winter to the AI Spring
One of the biggest blocks to progress was the issue of ‘common sense knowledge’. Attempts to create intelligent machines were stymied by the huge expanse of possible inputs and outputs that are associated with a given task, which could not all be anticipated and programmed into a system without a mammoth exercise lasting many years (although the researcher Douglas Lenat has attempted this with a project named Cyc).

Think of language translation, where the hidden meaning of phrases can be lost if words are converted literally from one language to the other. Unanticipated idioms would regularly throw systems out of kilter. Or consider image recognition, where a mannequin or puppet might be mistaken for a person, despite this obviously not being the case to a human observer. The ‘combinatorial explosion’ of possibilities in the messy world of real life was too much for the computers of the day.

Things began to change, however, in the late 1990s and early 2000s. Increased computing and storage power meant AI systems could finally process and hold significant amounts of information. And thanks to the spread of personal computing and the advent of the internet, this valuable data was becoming ever more available – whether in the form of images, text, maps or transaction information. Crucially, this data could be used to ‘train’ AI systems using machine learning methods.

Prior to this new approach of machine learning, many AI applications were underpinned by ‘expert systems’, which meant painstakingly developing a series of if-then rules and procedures that would guide basic decision-making (picture a decision tree or web). These were useful when dealing with a contained task – say, processing cash withdrawals under the bonnet of an ATM – but were not made to handle novel or unanticipated inputs where there could be millions of potential outcomes.
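A minimal sketch of an expert system in this spirit, assuming invented rules and limits for the ATM withdrawal example – every outcome is a hand-written if-then branch, with no learning involved:

```python
# A hand-coded 'expert system' for approving an ATM cash withdrawal.
# Each rule is an explicit if-then branch; the limits are invented.

def approve_withdrawal(amount: float, balance: float,
                       daily_limit: float) -> tuple[bool, str]:
    if amount <= 0:
        return False, "invalid amount"
    if amount > balance:
        return False, "insufficient funds"
    if amount > daily_limit:
        return False, "over daily limit"
    return True, "approved"

assert approve_withdrawal(50, 200, 300) == (True, "approved")
assert approve_withdrawal(500, 200, 300) == (False, "insufficient funds")
```

This works precisely because withdrawals are a contained task; as the article notes, the approach breaks down when the space of possible inputs cannot be enumerated by hand.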

What makes machine learning so transformative is that it works backwards from existing real-world examples. Instead of writing thousands of lines of code, machines are fed huge datasets, which are then analysed for common patterns, yielding a generalised rule that can be used to make sense of future inputs. With image recognition, for example, machine learning algorithms are given a large number of pictures, each pre-labelled (e.g. ‘mountain’ or ‘house’), and these are used to create a general rule for interpreting future photos.
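A minimal illustration of learning a rule from labelled examples rather than hand-coding it: a nearest-centroid classifier over invented 2-D points that stand in for image features. Nothing here is the article's own method – just the simplest learner that shows the 'work backwards from examples' idea.

```python
# Learn a classification rule from labelled examples instead of coding it.
# Each example is ((x, y), label); the 2-D points are invented stand-ins
# for image features, and the 'rule' is: assign the nearest class centroid.

def train(examples):
    """Average the points of each label into a centroid."""
    sums = {}
    for (x, y), label in examples:
        sx, sy, n = sums.get(label, (0.0, 0.0, 0))
        sums[label] = (sx + x, sy + y, n + 1)
    return {lbl: (sx / n, sy / n) for lbl, (sx, sy, n) in sums.items()}

def predict(centroids, point):
    """Generalised rule: label a new point by its nearest centroid."""
    px, py = point
    return min(centroids,
               key=lambda l: (centroids[l][0] - px) ** 2
                           + (centroids[l][1] - py) ** 2)

data = [((0, 0), "house"), ((1, 1), "house"),
        ((8, 8), "mountain"), ((9, 9), "mountain")]
model = train(data)
assert predict(model, (0.5, 0.2)) == "house"
assert predict(model, (8.5, 9.0)) == "mountain"
```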

The applications of machine learning are almost limitless – from aiding the detection of cancers and radically improving language translation, through to spotting fraudulent behaviours in financial markets and assisting businesses as they recruit workers.

Deep learning as the next frontier
Machine learning is the main reason for the renewed interest in artificial intelligence, but deep learning is where the most exciting innovations are happening today. Considered by some to be a subfield of machine learning, this new approach to AI is informed by neurological insights about how the human brain functions and the way that neurons connect with one another.

Deep learning systems are formed of artificial neural networks that exist on multiple layers (hence the word ‘deep’), with each layer given the task of making sense of a different pattern in images, sounds or texts. The first layer may detect rudimentary patterns, for example the outline of an object, whereas the next layer may identify a band of colours. And the process is repeated across all the layers and across all the data until the system can cluster the various patterns to create distinct categories of, say, objects or words.
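One way to see how stacked layers compose simple patterns into more abstract ones is a tiny two-layer network with hand-set weights; a trained network would learn such weights from data. Here the hidden units act as 'OR' and 'AND' detectors, which the output layer combines into XOR – a function no single layer of this kind can compute.

```python
# A minimal two-layer network (pure Python, hand-set weights) showing how
# each layer builds on the patterns found by the layer below it.

def step(x):
    """Threshold activation: fire (1) if the weighted input is positive."""
    return 1 if x > 0 else 0

def layer(inputs, weights, biases):
    """One dense layer of threshold units."""
    return [step(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def xor_net(a, b):
    hidden = layer([a, b], [[1, 1], [1, 1]], [-0.5, -1.5])  # OR and AND detectors
    return layer(hidden, [[1, -2]], [-0.5])[0]              # 'OR but not AND'

assert [xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 0]
```

Real deep networks differ in scale and in how the weights are found (gradient-based training over data), but the layered composition is the same idea.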

Deep learning is particularly impressive because, unlike the conventional machine learning approach, it can often proceed without humans ever having defined the categories in advance, whether they be objects, sounds or phrases. The distinction here is between supervised and unsupervised learning, and the latter is showing ever more impressive results. According to a King’s College London study, deep learning techniques more than doubled the accuracy of brain age assessments using raw data from MRI scans.

Assuming these innovations continue to progress, the prospects for AI to influence our health and happiness, our economic security and our working lives, are truly mind-boggling. Whether or not we are prepared for this new machine age is another question – but we can only begin to answer it by knowing what exactly we’re all talking about.

If you have another interpretation of AI, or want to pick up on any of the domains and uses mentioned in this blog, please post a comment below. The more perspectives, the better.
I see your love shining out from my furry friends faces, when I look into their eyes. I see you in the flower’s smile, the rainbow, and the wind in the trees....

Re: A thread for all things AI

Post by Spiritwind »

When AI Becomes a Weapon
https://partners.wsj.com/deep-instinct/ ... -a-weapon/?

What will happen when attackers start using a much more sophisticated AI to their advantage? An AI-based attack can be initiated in three possible ways.

THE AMPLIFIED EFFICIENCY of artificial intelligence (AI) means that once an inference model is available as the outcome of training, it can be put to malicious use, and that malicious intent can be deployed across a far greater number of devices and networks, more quickly and efficiently, than any malevolent human actor could manage. Given sufficient computing power, AI-based malware could launch many attacks, be more selective in its targets and more devastating in its impact. The potential scale of destruction makes even a nuclear explosion sound rather limited.

Currently, the use of AI by attackers is mainly explored at an academic level – for example on adversarial networks – and we have yet to see AI attacks in the wild. However, there is a lot of talk in the industry about attackers using AI in their malicious efforts, and about defenders using machine learning as a defense technology.
There are three types of attack in which an attacker can use AI:

AI-based cyber-attacks
The malware operates AI algorithms as an integral part of its business logic. Here, AI algorithms are used to detect anomalies and irregular user and system activity patterns, and the results are used to decide when to execute the malware, to adjust its evasion and stealth configurations, and to time its communications. An example of this is a 2018 proof-of-concept tool, developed by a U.S.-based IT multinational, which concealed ransomware and autonomously decided which computer to attack based on a face-recognition algorithm. Only when the target was recognized (in this case via facial recognition) would the attack take place.

There are other, hypothetical examples where AI could be incorporated into the malware’s business logic. Consider ‘anti-virtual machine (VM)’ malware: this sophisticated malware checks whether it is being run on a VM, to avoid operating in a sandbox (which would reveal that the file is malicious) or being analyzed by a security researcher (which would reveal how it works). To assist their anti-VM efforts, malware developers could use AI algorithms to train a VM-environment classifier, which would collect details of the environment (such as registry keys, loaded drivers, etc.) and determine whether the host the malware is running on is a VM. If it identifies a VM environment, the malware can remain dormant; if not, it can trigger its malicious activity.

AI-facilitated cyber-attacks
The malicious code and malware running on the victim’s machine do not include AI algorithms; instead, the AI is used elsewhere in the attacker’s environment. An example of this is info-stealer malware, where large amounts of personal information are uploaded to a C&C server, which then runs an NLP algorithm to cluster and classify sensitive information as interesting (e.g., credit card numbers). Consider #TheFappening attack of 2014, in which some 500 celebrity photos stored on a cloud server were leaked. Had the attack been supported by AI, the damage could have been far greater: computer-vision algorithms could have reviewed the millions of pictures that were leaked, identified which of them contained celebrities, and exposed only those images.
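The 'classify sensitive information' step can be illustrated from the defender's side: a data-loss-prevention filter applies exactly this kind of test when scanning text for card-like numbers. The Luhn checksum below is the standard card-number check; the sample strings are invented (the second is a well-known Luhn-valid test number, not a real card).

```python
# Scan text for 16-digit strings that pass the Luhn checksum - the kind
# of filter a data-loss-prevention tool uses to flag card numbers.

import re

def luhn_valid(number: str) -> bool:
    """Luhn checksum: double every second digit from the right,
    sum the digits of the results, and check the total mod 10."""
    digits = [int(d) for d in number][::-1]
    total = sum(digits[0::2]) + sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
    return total % 10 == 0

def find_card_like(text: str):
    """Return 16-digit substrings that also pass the Luhn check."""
    return [m for m in re.findall(r"\b\d{16}\b", text) if luhn_valid(m)]

sample = "order 1234123412341234 paid with 4539578763621486"
print(find_card_like(sample))
# -> ['4539578763621486']
```

An attacker's C&C-side classifier and a defender's DLP filter are, structurally, the same pattern-matching step; the article's point is about where that step runs.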

Another example of this is spear phishing. In a standard phishing attack, the target is ‘fooled’ by a superficially trustworthy façade that aims to trick the victim into exposing sensitive information or sending money. In contrast, a spear-phishing attack involves collecting and using a large amount of information specific to the target, making the façade look even more trustworthy and relevant. The most advanced spear-phishing attacks require a significant amount of skilled labor, because the attacker must identify suitably high-value targets, research those targets’ social and professional networks, and then generate messages that are plausible within this context. Using AI – and specifically generative NLP models – this can be done at a much larger scale, autonomously, and with very little resource outlay.

Adversarial attacks
This is the use of malicious AI techniques to subvert the functionality of benign AI algorithms – taking the algorithms and techniques built into a machine-learning system and ‘breaking’ them through reverse engineering. For instance, stochastic gradient descent, a technique used to train deep-learning models, can be turned around by adversaries to generate samples that will be misclassified by machine-learning algorithms. This is the equivalent of placing a sticker in a strategic position on a stop sign, causing an image-recognition classifier to misread it as a speed-limit sign.
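The gradient trick can be shown on a toy linear classifier. For a score w·x, the gradient with respect to the input is just the weight vector, so stepping each feature against the sign of its weight (an FGSM-style perturbation, a standard academic construction rather than anything from this article) flips the prediction with small per-feature changes. Weights and data are invented.

```python
# FGSM-style adversarial perturbation of a toy linear classifier.
# score(x) = w . x ; gradient of score w.r.t. x is w, so stepping each
# feature by -eps * sign(w_i) lowers the score as fast as possible
# under a per-feature budget - flipping the predicted class.

w = [2.0, -1.0, 0.5]          # classifier weights (invented)
x = [1.0, 0.5, 1.0]           # an input classified as positive

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def sign(v):
    return (v > 0) - (v < 0)

eps = 0.8                     # perturbation budget per feature
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

assert score(w, x) > 0        # original: positive class
assert score(w, x_adv) < 0    # perturbed: prediction flipped
```

Deep networks are not linear, but the same recipe (perturb along the gradient's sign) is what makes the sticker-on-a-stop-sign attack work in practice.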

The contest between constructive AI and malicious AI will continue to intensify, and to spill across the opaque border that separates academic proofs-of-concept from full-scale attacks in the wild. This will happen incrementally as computing power (GPUs) and deep-learning algorithms become ever more available to the wider public.

To best defend against an AI attack, you need to adopt the mindset of a malicious actor. Machine-learning and deep-learning experts must become familiar with these techniques in order to build robust systems that will defend against them.

In this technology arms race, deep learning, the most advanced form of AI, is the only option for stopping AI-based malware that will be generated in the developing attack landscape. Learn more about this advanced technology and how it has been applied to cybersecurity.

Nadav Maman, Deep Instinct Co-Founder & CTO, brings 15 years of experience in customer-driven business and technical leadership. He has a proven track record in managing complex technical cyber projects, including design, execution and sales, and vast hands-on experience with data security, network design, and the implementation of complex heterogeneous environments.

Wall Street Journal Custom Content is a unit of The Wall Street Journal Advertising Department. The Wall Street Journal news organization was not involved in the creation of this content.

Below is another good article and the link is still good as of today.

Smart Dust – The Future Of Involuntary Treatment Of The Public
https://www.naturalblaze.com/2017/02/sm ... ublic.html?

Re: A thread for all things AI

Post by Spiritwind »

Artificial (synonyms): man-made

Opposite terms: bona fide

A little more on organic:
- being or relating to or derived from or having properties characteristic of living organisms
- relating to or derived from living matter

And its opposite, inorganic:
- lacking the properties characteristic of living organisms

- A hint of the Greek word bios, meaning "life", can be seen in microbe. Microbes, or microorganisms, include bacteria, protozoa, fungi, algae, amoebas, and slime molds. Many people think of microbes as simply the causes of disease, but every human is actually the host to billions of microbes, and most of them are essential to our life. Much research is now going into possible microbial sources of future energy; algae looks particularly promising, as do certain newly discovered or created microbes that can produce cellulose, to be turned into ethanol and other biofuels. https://www.merriam-webster.com/dictionary/microbe

Nanite: a microscopically small machine or robot

And this from https://www.yourdictionary.com/nanites

* Because nanites are so small, they require little in the way of raw materials, just a few molecules here and there.
* Clearly, what nanites will do inside our bodies in the future is almost limitless and will change medicine forever.
* In the future, we will paint surfaces with substances full of nanites that will absorb sunlight and turn it into electricity, transforming any object we paint into a clean energy creator.
* Or how about nanites that process each piece of trash in our garbage and turn it into something useful?
* Or nanites that clean up any toxic chemicals they find and turn them into harmless agents?

Re: A thread for all things AI

Post by Spiritwind »

I’ve shared this video before, probably around when it first came out 8 years ago. It’s short, and I have more commentary to add, but will come back to this when I have more time.
